Absence of association between whole blood viscosity and delirium after cardiac surgery: a case-controlled study Shokoufeh CheheiliSobbi1,2,3, Mark van den Boogaard1, Arjen J. C. Slooter4, Henry A. van Swieten2, Linda Ceelen1, Gheorghe Pop3, Wilson F. Abdo1 & Peter Pickkers1 Journal of Cardiothoracic Surgery volume 11, Article number: 132 (2016) Cite this article Delirium after cardiothoracic surgery is common and associated with impaired outcomes. Although several mechanisms have been proposed (including changes in cerebral perfusion), the pathophysiology of postoperative delirium remains unclear. Blood viscosity is related to cerebral perfusion and thereby might contribute to the development of delirium after cardiothoracic surgery. The aim of this study was to investigate whether whole blood viscosity differs between cardiothoracic surgery patients with and without delirium. In this observational study, the postoperative whole blood viscosity of patients who developed delirium (cases) was compared with that of non-delirious cardiothoracic surgery patients (controls). Cases were matched with the controls, yielding a 1:4 case–control study. Serial hematocrit, fibrinogen, and whole blood viscosity were determined pre-operatively and on each postoperative day. Delirium was assessed using the validated Confusion Assessment Method for the Intensive Care Unit or the Delirium Screening Observation scale. In total, 80 cardiothoracic surgery patients were screened, of whom 12 delirious and 48 matched non-delirious patients were included. No significant difference was found between the groups in fibrinogen (p = 0.36), hematocrit (p = 0.23), the area under the curve of whole blood viscosity between shear rates 0.02 and 50 s-1 (p = 0.80), or that between shear rates 0.02 and 5 s-1 (p = 0.78). In this case–control study in cardiothoracic surgery patients, changes in whole blood viscosity were not associated with the development of delirium. Delirium is a serious neuropsychiatric disorder characterized by an acute onset of altered mental status, hallmarked by difficulty in sustaining attention and typically a fluctuating course [1]. Delirium occurs frequently in hospitalized patients, especially in Intensive Care Unit (ICU) patients [2]. In cardiothoracic surgery, the incidence of delirium during the postoperative ICU stay is reported to be between 13 % and 42 % [3, 4]. Postoperative delirium in cardiothoracic surgery patients is associated with increased length of ICU and hospital stay, increased risk of sternal wound infection, unwanted removal of arterial/venous lines or epicardial electrodes, significantly impaired quality of life and higher long-term morbidity, mortality and healthcare costs [5, 6]. The pathophysiological mechanism of delirium is far from clear [7]. Apart from other possible pathways related to the development of delirium [8], reduced cerebral blood flow during delirium with normalization during recovery has been reported [9, 10]. Cerebral blood flow is strongly related to whole blood viscosity (WBV) [11]. Changes in blood viscosity occur post-cardiothoracic surgery [12]. As such, changes in blood viscosity could relate to the occurrence of postoperative delirium, and this could represent an important interventional target to prevent or treat delirium. Blood viscosity is higher at low shear rates, e.g. in the microcirculation [13]. Therefore, an increased WBV leads to a larger reduction in microcirculatory blood flow compared to blood flow in larger blood vessels.
Since cellular perfusion is dependent on microcirculatory flow [14] and blood viscosity affects microcirculatory flow, we hypothesized that changes in viscosity could be related to the development of delirium. The aim of our study was to investigate whether whole blood viscosity differs between cardiothoracic surgery patients with and without delirium. Study design and patients This is an exploratory, matched, case–control study carried out in the Radboud University Medical Center, Nijmegen, The Netherlands. Annually, approximately 1000 cardiothoracic patients are operated on in the RadboudUMC. This study was approved by the medical ethical committee of Arnhem-Nijmegen (study number 2012/297), which waived the need for informed consent. The study population consisted of patients of 50 years or older after an elective cardiothoracic surgical on-pump procedure for coronary artery bypass grafting (CABG) or single heart valve surgery. For purposes of homogeneity of the total group, patients who underwent CABG combined with valve surgery were not included. Other exclusion criteria were the use of blood cardioplegia, since this is associated with the development of postoperative neurological events [15]; preoperative use of heparin, since heparin could decrease blood viscosity [16]; extracorporeal circulation (ECC) time exceeding 120 min, because ECC time is associated with neurological injury [17]; or inability to screen for delirium. Patients developing delirium postoperatively were defined as cases, and patients in whom no delirium occurred served as non-cases. The group of cases was matched 1:4 to controls to increase the power of the study. Matching was performed on several important preoperative and postoperative risk factors for the development of delirium [7]: gender, age, duration of surgery, aortic cross clamp (AOX) time, ECC time, severity of illness score (Acute Physiology and Chronic Health Evaluation (APACHE)-II score), and risk of death after a heart operation (European System for Cardiac Operative Risk Evaluation (Euro score)). Delirium screening Delirium assessment was performed three times a day. In order to obtain maximal sensitivity and specificity, we used a three-way approach to diagnose delirium. Firstly, in the ICU the most specific and sensitive scoring test for delirium, the validated Confusion Assessment Method for the ICU (CAM-ICU), was used by trained ICU nurses [18]. Secondly, for the non-ICU patients on the cardiothoracic surgical ward, nurses used the validated Delirium Screening Observation (DOS) scale [19]. Unfortunately, some patients are not diagnosed by validated tests [20]. Therefore, as the third approach, nursing and medical files were screened for signs of delirium so as not to miss these patients [21]. Delirium was defined as a positive CAM-ICU score or a DOS scale score ≥3. In order to maximize the sensitivity of the delirium diagnosis, in case of a negative CAM-ICU score we also checked the medical records for administration of haloperidol as treatment in combination with delirium signs noted in the nursing or medical files. Data collection and variables Earlier studies show that blood viscosity changes immediately after induction of anesthesia, immediately after surgery, and 1 and 2 days after surgery, and normalizes between 3 and 4 days post-cardiothoracic surgery [12].
For this reason, blood samples were collected at four time points: preoperatively, directly after the induction of anesthesia (T-1); within one hour of ICU admission (T0); and one day (T24) and three days (T72) after cardiothoracic surgery. Blood was drawn from the central venous catheter. If this was not possible, blood was taken from an indwelling arterial catheter or by venipuncture. During each blood collection, the most important determinants of WBV, hematocrit and serum fibrinogen, were also measured and taken into consideration during viscosity calculations [11, 22]. Also, the presence of diabetes mellitus, infection confirmed by appropriate culture, invasive mechanical ventilation and the mean of the following variables during postoperative ICU stay in both groups were registered: serum creatinine level, modification of diet in renal disease-glomerular filtration rate (MDRD-GFR), urea level, fluid balance, ejection fraction (EF), mean arterial blood pressure (MAP), infusion rate of inotropes or vasopressors, partial thromboplastin time (PTT), glucose level, and temperature. Whole blood viscosity measurement WBV is the intrinsic resistance of blood as it flows through blood vessels and is mainly determined by the shear rate of the flow, the volume fraction of red blood cells (hematocrit (Hct)), the concentration of plasma proteins, namely fibrinogen, red blood cell (RBC) aggregation and red cell deformation [10, 23]. Viscosity can be represented as a function between shear rate and shear stress. Shear rate indicates the velocity of the blood flow and shear stress is the force of blood against the vessel wall. Fibrinogen has a greater influence on whole blood viscosity at low shear rates than at high shear rates due to fibrinogen-induced RBC aggregation at low shear rates [24]. The interaction of fibrinogen and hematocrit on viscosity can be represented by an estimate of yield shear stress (YSS). YSS is the force required to start movement in a blood vessel [11, 22]. Furthermore, blood viscosity is dependent on temperature, especially at temperatures below 35 °C and above 39 °C [25, 26]. WBV was measured using the Contraves LS300 Low Shear Viscometer (ProRheo, Germany) within 180 min after blood sampling. The setting of the viscometer was standardized for all samples. Briefly, all blood tubes were placed on a shaker in the time between blood collection and the viscosity measurement. The viscosity was measured at 37 ± 0.1 °C, and at 23 different clinically relevant shear rate intervals (0.02-50 s-1) to minimize measurement errors [24]. As a measure for WBV, the area under the viscosity-shear rate curve (AUC) was used. The AUCs between the shear rates 0.02 and 5 s-1 were adjusted for Hct, since Hct has a major impact on blood viscosity at low shear rates [11, 24]. Adjustment was performed by dividing the blood viscosity between the shear rates 0.02 and 5 s-1 by Hct. This yields an estimate of whole blood viscosity adjusted for Hct that approximates the precise value [27]. In addition, the yield shear stress was analyzed to compare the influence of Hct and fibrinogen on WBV. YSS was calculated according to Equation 1 [11, 22]. $$ \mathrm{YSS} = 13.5 \times 10^{-6}\, C_f^2 \left(\mathrm{Hct} - 6\right)^3 $$ where Cf is the fibrinogen concentration in mg%. We used a case:control ratio of 1:4, which resulted in a power of 94 %, with a two-tailed alpha of 0.05. Student's t-tests or Mann–Whitney U tests were used depending on data distribution.
The Chi-square test was used to test the dichotomous variables. Because of the high level of attrition, Linear Mixed Model testing was used to study the association between blood viscosity, hematocrit, fibrinogen and delirium. A two-tailed p value of <0.05 was considered statistically significant. Statistical analysis was performed using IBM SPSS Statistics 20 and GraphPad Prism 5.0 (GraphPad Software, San Diego, CA, USA). In total, 80 cardiothoracic surgical patients were screened. Of these, 16 (20 %) developed delirium postoperatively. One non-delirious patient was excluded due to serious complications and sustained coma, and five non-delirious and four delirious patients were excluded because of missing data. Subsequently, 12 cases were matched with 48 non-cases. Nine patients developed delirium within 24 h, and 3 patients developed delirium within 72 h after surgery. Patient and demographic characteristics are depicted in Table 1. Table 1: Demographic variables of delirious and non-delirious patients. Postoperative levels of fibrinogen, hematocrit and whole blood viscosity Pre-operative fibrinogen, Hct and WBV were comparable between groups (Fig. 1). In both groups, fibrinogen levels and hematocrit decreased significantly after surgery (both p < 0.001). No significant difference was found between the groups in the reduction of fibrinogen (p = 0.36) and hematocrit (p = 0.23) (Fig. 1). Postoperatively, the AUC of WBV between shear rates 0.02 and 50 s-1 decreased significantly in both groups (p < 0.001). Again, there was no significant difference in this reduction between patients who developed delirium and those who did not (p = 0.80). The AUC of WBV between shear rates 0.02 and 5 s-1 remained similar over time in both groups and was not different between the groups either (p = 0.78). The AUC of blood viscosity adjusted for hematocrit was also comparable between the patients who developed delirium and those who did not (p = 0.33) (Figs. 1 and 2). Finally, changes in the YSS were also not different between the groups (p = 0.68). Fig. 1: Postoperative levels of hematocrit, fibrinogen and whole blood viscosity of delirious and non-delirious patients. a Hematocrit. b Fibrinogen. c The area under the curve (AUC) of whole blood viscosity (WBV) between shear rates 0.02 and 5 s-1. d The area under the curve (AUC) of whole blood viscosity (WBV) between shear rates 0.02 and 50 s-1. e The area under the curve (AUC) of whole blood viscosity corrected for hematocrit. T-1, directly after the induction of anesthesia. T0, within one hour of Intensive Care Unit admission. T24, one day after cardiothoracic surgery. T72, three days after cardiothoracic surgery. Data are expressed as mean and SD. Linear Mixed Model testing was used to determine a difference between the two groups. No significant differences were found. A two-tailed p value of < 0.05 was considered statistically significant. Fig. 2: Whole blood viscosity at 23 different shear rates between 0.02 and 50 s-1 in delirious and non-delirious patients. Log10 of whole blood viscosity versus log10 of shear rate. a Directly after the induction of anesthesia (T-1). b Within one hour of Intensive Care Unit admission (T0). c One day after cardiothoracic surgery (T24). d Three days after cardiothoracic surgery (T72). Changes in the Hct, fibrinogen, WBV and YSS did not differ significantly between patients who developed delirium within 24 h and those who developed delirium after 48 h.
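For reference, the viscosity summary measures used above (the AUC of the viscosity-shear rate curve, its adjustment for Hct, and the YSS of Equation 1) could be computed along the lines of the Python sketch below. This is an illustrative reconstruction with hypothetical inputs, not the authors' analysis code (which used SPSS and GraphPad Prism).

```python
import numpy as np

# Illustrative sketch only (hypothetical inputs); not the authors' SPSS/Prism analysis.
def viscosity_summaries(shear_rates, viscosities, hct, fibrinogen_mg_pct,
                        low_shear_max=5.0):
    """shear_rates: shear rates in 1/s (e.g., 23 values between 0.02 and 50);
    viscosities: measured whole blood viscosity at those shear rates;
    hct: hematocrit (in percent); fibrinogen_mg_pct: fibrinogen concentration in mg%."""
    order = np.argsort(shear_rates)
    sr = np.asarray(shear_rates, dtype=float)[order]
    wbv = np.asarray(viscosities, dtype=float)[order]

    # Area under the viscosity-shear rate curve (trapezoidal rule).
    auc_full = np.trapz(wbv, sr)                     # shear rates 0.02-50 1/s
    low = sr <= low_shear_max
    auc_low = np.trapz(wbv[low], sr[low])            # shear rates 0.02-5 1/s

    # Low-shear AUC adjusted for hematocrit (divided by Hct, as in the Methods).
    auc_low_adj = auc_low / hct

    # Yield shear stress, Equation 1: YSS = 13.5 x 10^-6 * Cf^2 * (Hct - 6)^3.
    yss = 13.5e-6 * fibrinogen_mg_pct ** 2 * (hct - 6.0) ** 3

    return {"auc_0.02_50": auc_full, "auc_0.02_5": auc_low,
            "auc_0.02_5_hct_adjusted": auc_low_adj, "yss": yss}
```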
We hypothesized that the development of delirium in post-cardiothoracic surgery patients could be related to changes in blood viscosity. In line with such a hypothesis is the fact that reduced cerebral blood flow has been suggested to be a possible pathway for the occurrence of delirium. In addition, strokes and dementia are correlated with the occurrence of delirium. Both strokes and dementia have been associated with high blood viscosity [28, 29] and both occur more frequently after cardiothoracic surgery. In this exploratory case–control study we did not find differences in the changes in hematocrit, fibrinogen levels or whole blood viscosity in post-cardiothoracic surgery patients who did or did not develop delirium. These findings indicate that changes in viscosity during the postoperative phase do not play a role in the development of post-operative delirium in cardiothoracic surgery patients. Moreover, the observed postoperative decrease of Hct and WBV provided no protection against delirium. Although several studies have shown a strong correlation between whole blood viscosity and cerebral flow [11, 24], it appears plausible that, due to low hematocrit levels after cardiothoracic surgery, the viscosity level is already so low that moderate changes at this low level of blood viscosity do not affect cerebral blood flow. It has been shown that a logarithmic correlation exists between hematocrit and whole blood viscosity, which is even stronger at lower shear rates [30]. Another reason why our hypothesis was not confirmed could be that cerebral blood flow and delirium are not strongly correlated. However, earlier studies have demonstrated a correlation between decreased cerebral blood flow and delirium [9, 10]. The postoperative changes in WBV we observed are in accordance with previous reports in cardiothoracic surgery patients [12, 31]. However, in those studies viscosity was only measured at a high shear rate (90 s-1), while in our study we measured viscosity at 23 different shear rates, including low shear rates. We found no postoperative changes in WBV at shear rates between 0.02 and 5 s-1 in cardiothoracic surgery patients. At low shear rates, red blood cells clump together due to fibrinogen-induced RBC aggregation, resulting in a higher viscosity that is dependent on both Hct and fibrinogen concentration [11, 24]. Therefore, unlike Papp et al. [12], we took a small part of the blood viscosity curve at low shear rates (between 0.02 and 5 s-1) and corrected for hematocrit. Nevertheless, there was no significant difference between delirious and non-delirious patients. Although the duration of delirium is not a true reflection of the severity of delirium, it is used in several studies as a measure of delirium severity. However, in our study the maximum delirium duration was 3 days (1 patient) and the remaining patients had 1–2 days of delirium. For this reason we treated delirium as a dichotomous variable. Data from other studies show that the median duration of delirium in cardiac surgery patients is two days [3, 32]. This is comparable to our data. Some limitations of this study need to be considered. Firstly, the gold standard to diagnose delirium was not used. The gold standard is clinical assessment performed by a psychiatrist, a neuropsychologist or a geriatrician. Delirium fluctuates during the day; therefore, the gold standard is not always practical [33].
Instead of the gold standard, the internationally validated CAM-ICU and DOS scale performed by nurses were used to enable multiple assessments per day per patient [19]. At each nursing shift, patients were screened for delirium. Using multiple assessments per day increases the sensitivity of delirium diagnosis. In addition, in order to minimize under-diagnosis, the reports of the doctors and nurses were analyzed for indications of delirium in combination with the use of anti-psychotics. Secondly, in this study we included both CABG and aortic valve surgery patients. In comparison to closed heart surgery, patients undergoing open heart surgery have an increased risk for cerebral embolization [34, 35]. Although the latter introduces heterogeneity, it results in data that can be generalized more easily to the daily practice of cardiac surgery. Finally, in this study 25 % of the data were missing. However, we used Linear Mixed Model testing, which allows for 25–30 % missing data [36]. In this group of cardiothoracic surgery patients no association was found between whole blood viscosity and the development of post-operative delirium. This finding indicates that in postoperative cardiothoracic surgery patients, delirium is probably not related to blood viscosity changes. AOX, aortic cross clamp; APACHE, acute physiology and chronic health evaluation; AUC, area under the curve; CABG, coronary artery bypass grafting; CAM-ICU, confusion assessment method intensive care unit; Cf, fibrinogen concentration; DOS, delirium screening observation; ECC, extracorporeal circulation; EF, ejection fraction; Euro score, European system for cardiac operative risk evaluation; Hct, hematocrit; ICU, intensive care unit; IQR, interquartile range; MAP, mean arterial blood pressure; MDRD-GFR, modification of diet in renal disease-glomerular filtration rate; N/A, not applicable; PTT, partial thromboplastin time; RBC, red blood cell; SD, standard deviation; WBV, whole blood viscosity; YSS, yield shear stress. van den Boogaard M, Schoonhoven L, Maseda E, Plowright C, Jones C, Luetz A, Sackey PV, Jorens PG, Aitken LM, van Haren FM, Donders R, van der Hoeven JG, Pickkers P. Recalibration of the delirium prediction model for ICU patients (PRE-DELIRIC): a multinational observational study. Intensive Care Med. 2014;40(3):361–9. doi:10.1007/s00134-013-3202-7. Dubois MJ, Bergeron N, Dumont M, Dial S, Skrobik Y. Delirium in an intensive care unit: a study of risk factors. Intensive Care Med. 2001;27(8):1297–304. van den Boogaard M, Schoonhoven L, van der Hoeven JG, van Achterberg T, Pickkers P. Incidence and short-term consequences of delirium in critically ill patients: a prospective observational cohort study. Int J Nurs Stud. 2012;49(7):775–83. doi:10.1016/j.ijnurstu.2011.11.016. Epub 2011 Dec 22. Koster S, Hensens AG, Schuurmans MJ, van der Palen J. Risk factors of delirium after cardiac surgery: a systematic review. Eur J Cardiovasc Nurs. 2011;10(4):197–204. doi:10.1016/j.ejcnurse.2010.09.001. Epub 2010 Sep 25. Li HC, Chen YS, Chiu MJ, Fu MC, Huang GH, Chen CC. Delirium, Subsyndromal Delirium, and Cognitive Changes in Individuals Undergoing Elective Coronary Artery Bypass Graft Surgery. J Cardiovasc Nurs. 2015;30(4):340–5. doi:10.1097/JCN.0000000000000170. Krähenbühl ES, Immer FF, Stalder M, Englberger L, Eckstein FS, Carrel TP. Temporary neurological dysfunction after surgery of the thoracic aorta: a predictor of poor outcome and impaired quality of life. Eur J Cardiothorac Surg.
2008;33(6):1025–9. doi:10.1016/j.ejcts.2008.01.058. Epub 2008 Mar 17. Van Rompaey B, Schuurmans MJ, Shortridge-Baggett LM, Truijen S, Bossaert L. Risk factors for intensive care delirium: a systematic review. Intensive Crit Care Nurs. 2008;24(2):98–107. Epub 2007 Oct 18. Flacker JM, Lipsitz LA. Neural mechanisms of delirium: current hypotheses and evolving concepts. J Gerontol A Biol Sci Med Sci. 1999;54(6):B239–46. Gunther ML, Morandi A, Ely EW. Pathophysiology of Deliriumin in the Intensive Care Uni. Crit Care Clin. 2008;24(1):45–65. viii. Yokota H, Ogawa S, Kurokawa A, et al. Regional cerebral blood flow in delirium patients. Psychiatry Clin Neurosci. 2003;57(3):337–9. Grotta J, Ackerman R, Correia J, Fallick G, Chang J. Whole blood viscosity parameters and cerebral blood flow. Stroke. 1982;13:296–301 doi:10.1161/01. Papp J, Toth A, Sandor B, Kiss R, Rabai M, Kenyeres P, et al. The influence of on-pump and off-pump coronary artery bypass grafting on hemorheological parameters. Clin Hemorheol Microcirc. 2011;49(1–4):331–46. Papaioannou TG, Stefanadis C. Vascular Wall Shear Stress: Basic Principles and Methods. Hellenic J Cardiol. 2005;46:9–15. Cipolla JM. The cerebral circulation. University of Vermont College of Medicine. San Rafael (CA): Morgan & Claypool Life Sciences; 2009. Craver JM, Bufkin BL, Weintraub WS, Guyton RA. Neurologic events after coronary bypass grafting: further observations with warm cardioplegia. Ann Thorac Surg. 1995;59(6):1429–33. discussion 1433–4. Ruggiero HA, Castellanos H, Caprissi LF, Caprissi ES. Heparin effect on blood viscosity. Clin Cardiol. 1982;5(3):215–8. Czerny M, Krähenbühl E, Reineke D, Sodeck G, Englberger L, Weber A, Schmidli J, Kadner A, Erdoes G, Schoenhoff F, Jenni H, Stalder M, Carrel T. Mortality and neurologic injury after surgical repair with hypothermic circulatory arrest in acute and chronic proximal thoracic aortic pathology: effect of age on outcome. Circulation. 2011;124(13):1407–13. doi:10.1161/CIRCULATIONAHA.110.010124. Epub 2011 Aug 29. Ely EW, Margolin R, Francis J, May L, Truman B, Dittus R, et al. Evaluation of delirium in critically ill patients: validation of the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU). Crit Care Med. 2001;29(7):1370–9. Schuurmans MJ, Shortridge-Baggett LM, Duursma SA. The Delirium Observation Screening Scale: a screening instrument for delirium. Res Theory Nurs Pract. 2003;17(1):31–50. Gusmao-Flores D, Salluh JIF, Chalhub RÁ, Quarantini LC. The confusion assessment method for the intensive care unit (CAM-ICU) and intensive care delirium screening checklist (ICDSC) for the diagnosis of delirium: a systematic review and meta-analysis of clinical studies. Crit Care. 2012;16(4):R115. 10.1186/cc11407. Inouye SK, Leo-summers ÃL, Zhang Y, Bogardus ST, Leslie DL, Agostini JV. A Chart-Based Method for Identification of Delirium : Validation Assessment Method. J Am Geriatr Soc. 2005;53(2):312–8. Merrill EW, Cheng CS, Pelletier GA. Yield Stress of Normal Human Blood as a Function of Endogenous Fibrinogen. J Appl Physiol. 1969;26(1):1–3. Fong TG, Bogardus ST, Daftary A. Interrelationship between delirium and dementia:cerebral perfusion changes in older delirious patients using 99mTc HMPAO SPECT. J Gerontol A Biol Sci Med Sci. 2006;61A:1294–9. Pop G, Bisschops LL, Iliev B, Struijk PC, van der Hoeven JG, Hoedemaekers CW. On-line blood viscosity monitoring in vivo with a central venous catheter, using electrical impedance technique. Biosens Bioelectron. 2013;41:595–601. doi:10.1016/j.bios. 
Ayres ML, Jarrett PEM, Browse NL. Blood viscosity, Raynaud's phenomenon and the effect of fibrinolytic enhancement. Br J Surg. 1981;68(1):51–4. Cinar Y, Senyol AM, Duman K. Blood viscosity and blood pressure: role of temperature and hyperglycemia. Am J Hypertens. 2001;14(5 Pt 1):433–8. Matrai A, Whittington RB, Ernst E. A simple method of estimating whole blood viscosity at standardized hematocrit. Clin Hemorheol. 1987;7:261–5. Park M-S, Kim B-C, Kim I-K, Lee S-H, Choi S-M, Kim M-K, Lee S-S, Cho K-H. Cerebral Infarction in IgG Multiple Myeloma with Hyperviscosity. J Korean Med Sci. 2005;20(4):699–701. doi:10.3346/jkms.2005.20.4.699. Grigg AP, Allardice J, Smith IL, Murray W, Horsfall D, Parkin D. Hyperviscosity syndrome in disseminated breast adenocarcinoma. Pathology. 1994;26(1):65–8. Aronson HB, Cotev S, Magora F, Borman JB, Merin G. Blood viscosity and open heart surgery. A comparison of values in systemic and pulmonary blood vessels. Br J Anaesth. 1974;46(10):722–5. Weaver JP, Evans A, Walder DN. The effect of increased fibrinogen content on the viscosity of blood. Clin Sci. 1969;36(1):1–10. Koster S, Oosterveld FG, Hensens AG, Wijma A, van der Palen J. Delirium after cardiac surgery and predictive validity of a risk checklist. Ann Thorac Surg. 2008;86(6):1883–7. doi:10.1016/j.athoracsur.2008.08.020. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders (DSM-IV). 4th ed. Washington D.C: American Psychiatric Association; 1994. Wolman RL, Nussmeier NA, Aggarwal A, Kanchuger MS, Roach GW, Newman MF, Mangano CM, Marschall KE, Ley C, Boisvert DM, Ozanne GM, Herskowitz A, Graham SH, Mangano DT. Cerebral injury after cardiac surgery: identification of a group at extraordinary risk. Multicenter Study of Perioperative Ischemia Research Group (McSPI) and the Ischemia Research Education Foundation (IREF) Investigators. Stroke. 1999;30(3):514–22. Ying Tan M, Amoako D. Postoperative cognitive dysfunction after cardiac surgery. Contin Educ Anaesth Crit Care Pain. 2013. doi: 10.1093/bjaceaccp/mkt022. Twisk JWR, Ellenberg SS, Elston R, Everitt B, Everitt BS, Harrell F. Applied Multilevel Analysis: A Practical Guide for Medical Researchers (Practical Guides to Biostatistics and Epidemiology). ISBN10-0521614988, ISBN13- 9780521614986. The authors would like to thank the nurses on the intensive care and medium care unit and cardiothoracic surgical ward for their help with the blood collections. They also would like to thank prof. dr. ir. Robert F. Mudde and L. Bergwerff MSc for their help with analysis of the data. Dr WF Abdo received financial support from the Netherlands Organization for Health Research and Development (ZonMW Clinical Fellowship number 90715610). All authors participated in design of the study, collection of the data, analysis of the data and critically reviewed the paper. All authors read and approved the final manuscript and gave consent for publication. Department of Intensive Care Medicine, Radboudumc, Nijmegen, The Netherlands Shokoufeh CheheiliSobbi, Mark van den Boogaard, Linda Ceelen, Wilson F. Abdo & Peter Pickkers Department of Cardiothoracic Surgery, Radboudumc, Nijmegen, The Netherlands Shokoufeh CheheiliSobbi & Henry A. van Swieten Department of Cardiology, Radboudumc, Nijmegen, The Netherlands Shokoufeh CheheiliSobbi & Gheorghe Pop Department of Intensive Care Medicine, University Medical Centre Utrecht, Utrecht, The Netherlands Arjen J. C. Slooter Shokoufeh CheheiliSobbi Mark van den Boogaard Henry A. 
van Swieten Linda Ceelen Gheorghe Pop Wilson F. Abdo Peter Pickkers Correspondence to Shokoufeh CheheiliSobbi. CheheiliSobbi, S., van den Boogaard, M., Slooter, A.J.C. et al. Absence of association between whole blood viscosity and delirium after cardiac surgery: a case-controlled study. J Cardiothorac Surg 11, 132 (2016). https://doi.org/10.1186/s13019-016-0517-9 Accepted: 27 July 2016 Whole blood viscosity
July 2014, 13(4): 1653-1667. doi: 10.3934/cpaa.2014.13.1653 Existence and uniqueness of a positive connection for the scalar viscous shallow water system in a bounded interval Marta Strani, Universitat Wuerzburg, Campus Hubland Nord, Emil-Fischer-Strasse 30, 97074 Wuerzburg, Germany Received September 2013 Revised January 2014 Published February 2014 We study the existence and the uniqueness of a positive connection, that is, a stationary solution connecting the boundary data, for the initial-boundary value problem for the viscous shallow water system
$$ \partial_t u + \partial_x v = 0, \qquad \partial_t v + \partial_x\left( \frac{v^2}{u} + P(u) \right) = \varepsilon\, \partial_x\left( u\, \partial_x\left( \frac{v}{u} \right) \right) $$
in a bounded interval $(-l,l)$ of the real line. We firstly consider the general case where the term of pressure $P(u)$ satisfies
$$ P(0)=0, \quad P(+\infty)=+\infty, \quad P'(u) \ \text{and} \ P''(u) > 0 \quad \forall\, u > 0, $$
and then we show properties of the steady state in the relevant case $P(u)=\kappa u^{\gamma}$, $\gamma>1$. The viscous Saint-Venant system, corresponding to $\gamma=2$, fits in the general framework. Keywords: Saint-Venant system, hyperbolic-parabolic systems, stationary solutions, shallow water equations, positive connection. Mathematics Subject Classification: Primary: 35A01, 35L50; Secondary: 34A0. Citation: Marta Strani. Existence and uniqueness of a positive connection for the scalar viscous shallow water system in a bounded interval. Communications on Pure & Applied Analysis, 2014, 13 (4) : 1653-1667. doi: 10.3934/cpaa.2014.13.1653
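A minimal sketch of the stationary problem behind this statement, assuming smooth solutions: for steady states the time derivatives vanish, so the first equation forces $v$ to be a constant $V$ on $(-l,l)$, and one integration of the momentum equation gives a first-order equation for $u$,
$$ \frac{V^2}{u} + P(u) + \varepsilon V\, \frac{\partial_x u}{u} = C, \qquad x \in (-l,l), $$
for some constant of integration $C$. A positive connection, in the terminology above, is a stationary solution of this form that connects the prescribed boundary data.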
Utility of mosquito surveillance data for spatial prioritization of vector control against dengue viruses in three Brazilian cities Kim M Pepin1,2,3, Clint B Leach3, Cecilia Marques-Toledo4, Karla H Laass5, Kelly S Paixao5, Angela D Luis1,3,8, David TS Hayman3,6,10, Nels G Johnson3, Michael G Buhnerkempe3,9, Scott Carver3,7, Daniel A Grear3, Kimberly Tsao3, Alvaro E Eiras5 & Colleen T Webb1,3 Parasites & Vectors volume 8, Article number: 98 (2015) Cite this article Vector control remains the primary defense against dengue fever. Its success relies on the assumption that vector density is related to disease transmission. Two operational issues include the amount by which mosquito density should be reduced to minimize transmission and the spatio-temporal allotment of resources needed to reduce mosquito density in a cost-effective manner. Recently, a novel technology, MI-Dengue, was implemented city-wide in several Brazilian cities to provide real-time mosquito surveillance data for spatial prioritization of vector control resources. We sought to understand the role of city-wide mosquito density data in predicting disease incidence in order to provide guidance for prioritization of vector control work. We used hierarchical Bayesian regression modeling to examine the role of city-wide vector surveillance data in predicting human cases of dengue fever in space and time. We used four years of weekly surveillance data from Vitoria city, Brazil, to identify the best model structure. We tested effects of vector density, lagged case data and spatial connectivity. We investigated the generality of the best model using an additional year of data from Vitoria and two years of data from other Brazilian cities: Governador Valadares and Sete Lagoas. We found that city-wide, neighborhood-level averages of household vector density were a poor predictor of dengue-fever cases in the absence of accounting for interactions with human cases. Effects of city-wide spatial patterns were stronger than within-neighborhood or nearest-neighborhood effects. Readily available proxies of spatial relationships between human cases, such as economic status, population density or between-neighborhood roadway distance, did not explain spatial patterns in cases better than unweighted global effects. For spatial prioritization of vector controls, city-wide spatial effects should be given more weight than within-neighborhood or nearest-neighborhood connections, in order to minimize city-wide cases of dengue fever. More research is needed to determine which data could best inform city-wide connectivity. Once these data become available, MI-dengue may be even more effective if vector control is spatially prioritized by considering city-wide connectivity between cases together with information on the location of mosquito density and infected mosquitos. Understanding the relationship between Aedes aegypti vectors and the patterns of dengue fever they cause is important in the design of vector-based disease control strategies. Because it is often not feasible or possible to eradicate the mosquito vectors [1], quantitative knowledge of how vector density relates to disease incidence is essential for deciding how much vector populations need to be reduced in order to decrease disease incidence adequately. Mechanistic knowledge of transmission is also important because methods of vector control that are designed based on perceived spatial patterns of cases are often not effective [1,2]. 
Identifying how vectors are connected to disease incidence in space and time would allow for more cost-effective strategies of implementing vector controls. The strength and direction of the relationship between mosquito density and dengue infection varies depending on the spatial scale at which data are collected and community characteristics [3-8]. For example, when comparing adult vector densities with prevalence of human infections across three sets of community conditions (urban, suburban, slum) within Rio de Janeiro, Brazil, Honorio et al. [3] found higher infection prevalence in the slum where vector density was lowest. This negative relationship was hypothesized to be because living conditions in the slum facilitated greater rates of vector-human contact relative to the highly developed urban area. At the household scale, no relationship between vector density and disease prevalence was found [3], although it was acknowledged that larger numbers of infections are required at this scale before appropriate conclusions can be drawn. On the other hand, in rural villages in Thailand, a non-significant but positive trend in the relationship between adult vector density and child infection prevalence was found at the household and between-house levels [7]. Considering that within- and between-house transmission have been shown to be important [9], the weak relationship between adult vector density and human infections at the household level is surprising. One potential explanation for the weak relationship is sampling – the number of replicate samples in space and time, or techniques used for vector collection, may not be adequate for estimating household mosquito density at a level of precision that is smaller than the ecologically-determined variation in vector density. A second explanation could be human movement [10] – human contact patterns at different spatial scales (local and long-distance) can explain spatial dengue transmission [9,11], highlighting that movement at multiple spatial scales is important to consider when linking vector densities to human cases. Theoretical work has demonstrated that the rate of within-city transmission of dengue virus depends on the type of human movements: regular movement patterns due to commuting patterns, for example, can slow the rate of disease spread by up to 25% in comparison to random movement patterns [12]. In contrast, temporally unstructured movements, such as those found in resource-poor settings, can increase the size of an epidemic by up to 20% [13]. Thus, consideration of human movement at different spatial scales is important for understanding how mosquito density data can be used for targeting vector controls. Several cities in Brazil have implemented a city-wide mosquito trapping system, MI-Dengue, which monitors weekly prevalence of gravid Ae. aegypti and Ae. Albopictus city-wide in real time [14-16]. Traps are associated with households and spaced in a grid-like manner at ~200-300 m intervals, depending on the city. Vector density data are automatically available for control personnel who respond by focusing source reduction, larvicide and, more rarely, adulticide activities to neighborhood blocks with high mosquito density. The MI-dengue system – based mainly on the idea that spatially targeting areas with higher densities of gravid female mosquitos will decrease case loads using fewer resources – has been shown to be effective and cost-effective for reducing human infections [16]. 
It has been demonstrated that confirmed cases in humans cluster with high mosquito density in space and time [14], but rigorous quantitative analyses that identify how to best use the surveillance data have not been conducted. Although information on infected mosquitos and confirmed cases in humans are given the highest weight in spatial prioritization of vector control, these data are rarer and often not available until well after transmission has occurred, emphasizing the importance of identifying the best method of using mosquito density data in spatial prioritization of vector control. While experiments to determine appropriate spatial scales for estimating vector density are still ongoing, the available data are numerous (~5,726 - 43,467 mosquitoes surveyed annually per city) and could reveal useful insight on the spatio-temporal relationship between vector densities and human cases within entire cities. Here, we sought to better understand the city-wide relationship between vector densities and human cases to provide further guidance for spatially targeting vector control work. Our analysis has the following four aims, to: 1) quantify the relative role of city-wide mosquito surveillance data in predicting city-wide cases of dengue, 2) identify the spatial scale at which case data from other neighborhoods are important, 3) identify whether readily available data related to urban characteristics can be used to approximate spatial patterns of human cases, and 4) understand how city-wide mosquito surveillance data can be used to spatially prioritize vector control activities in order to have the maximum effect on preventing cases of dengue fever. We base our analyses on data from Vitoria city, Brazil, because it had the longest time series of surveillance data (~5 years), but we use data from two other cities for validation of model structure and a deeper understanding of model parameters. Study site Models were developed using data from Vitoria city, Brazil, an economically prosperous coastal city that is the largest city (348,265 inhabitants) in the state of Espirito Santo in southeastern Brazil. Among the 27 major cities in Brazil, Vitoria has the 4th highest human development index (HDI; 0.85), the highest gross domestic product per capita and an unemployment rate of 7.25% (Brazilian Institute of Statistics and Geography, 2010 Census). The climate is tropical with an annual mean temperature of 23°C and a rainy season between October and January (National Institute of Meteorology, Brazil). Due to its prosperity, size and port capabilities, there is frequent movement of people and merchandise to and from nearby and more distant cities that are less developed. Data from two other cities, Governador Valadares (GV; population 263,594) and Sete Lagoas (SL; population 208,847), both in the state of Minas Gerais, Brazil, were used for model testing and validation. Both cities have a history of dengue fever outbreaks and are similarly economically prosperous with HDI and unemployment rates of 0.77 and 6.8% (GV), and 0.76 and 6.8% (SL) (Brazilian Institute of Statistics and Geography, 2010 Census). The river Doce bisects GV acting as a gateway between major marine ports. Annual mean temperatures are 24.6°C (GV) and 20.9°C (SL), with a rainy season between October and March (National Institute of Meteorology, Brazil). 
Neighborhood-level population sizes, areas (Additional file 1, spreadsheet "Neighborhoods") and economic data were obtained from the 2010 census (mentioned above), from the local vector control managers and the Ministry of Health Secretaries. For Vitoria, economic values were the sum of the registered commercial (including industry and service) units for each neighborhood. For GV and SL, neighborhood economic data were either the number of registered residences or commercial units per neighborhood. Case data Notified cases of dengue fever were obtained from each city's Ministry of Health Secretary's official database, which lists dengue-fever cases by their residential address and date of first symptoms. In Brazil, dengue is a mandatory notifiable disease and thus the database represents all cases where any kind of medical care was sought. However, only samples at the start of an epidemic are validated for the presence of dengue virus. Once an epidemic is deemed started, most other cases are diagnosed symptomatically, such that consistent serotype information is unavailable. Although neighborhood assignments were complete, street address information was often lacking, thus we aggregated the case data to the neighborhood level - a political boundary defined by the city. The numbers of neighborhoods in each city were: Vitoria – 75, GV– 65, SL – 98. Neighborhood population sizes and areas were variable both within and between cities (mean ± 2 standard errors for population sizes and areas in km2 were: Vitoria – 4,080 ± 1,614, 0.47 ± 0.11; GV – 3,435 ± 862, 3.14 ± 4.69; SL – 2,000 ± 327, 0.37 ± 0.06; Additional file 1: Table S1). We summed the cases in weekly intervals to match the temporal scale of the mosquito data. Mosquito surveillance data Mosquito data were obtained from a city-wide surveillance system (MI-Dengue) [15] managed by the company, Ecovec, which originated from an academic setting and is located in Belo Horizonte, MG, Brazil. The system is comprised of a network of sticky traps, called MosquiTRAP, which have been extensively tested and described elsewhere [17-21]. Briefly, traps are placed in a lattice throughout the entire city. Each trap is checked weekly for mosquitos, which are identified to species level. The data are entered by cell phones to a database that automatically generates maps of mosquito density for control personnel, who target control to highly infested areas. We obtained weekly counts of the gravid female Ae. aegypti (93.2– 98.4% of all mosquitos depending on city) and Ae. albopictus species, the primary and secondary vectors of dengue fever. Because each trap was located on the inside or outside of a residence, we expressed the mosquito data as average household mosquito density per neighborhood (mosquitos/traps per neighborhood per week; 18.6 traps/neighborhood on average) to match the spatial scale of the available case data. Using an average household abundance estimate also has the advantage of reducing the uncertainty in household mosquito density compared with using single-replicate trap-level counts for each time point. 
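As an illustration of this aggregation (weekly mosquitos per trap by neighborhood, and weekly case counts by neighborhood), a minimal sketch in Python is shown below. The column names (trap_id, neighborhood, week, mosquito_count, week_of_first_symptoms) are hypothetical placeholders, and the code is not part of the MI-Dengue or Ministry of Health data pipelines.

```python
import pandas as pd

# Hypothetical raw inputs (column names assumed, not from the actual databases):
# traps: one row per trap per week  -> trap_id, neighborhood, week, mosquito_count
# cases: one row per notified case  -> neighborhood, week_of_first_symptoms

def weekly_neighborhood_series(traps: pd.DataFrame, cases: pd.DataFrame) -> pd.DataFrame:
    # Average household mosquito density: mosquitos per trap, per neighborhood, per week.
    density = (traps.groupby(["neighborhood", "week"])
                    .agg(mosquitos=("mosquito_count", "sum"),
                         traps=("trap_id", "nunique")))
    density["mosquito_density"] = density["mosquitos"] / density["traps"]

    # Weekly case counts per neighborhood, matching the temporal scale of the trap data.
    case_counts = (cases.groupby(["neighborhood", "week_of_first_symptoms"])
                        .size()
                        .rename("cases"))
    case_counts.index = case_counts.index.set_names(["neighborhood", "week"])

    # Outer join so neighborhood-weeks with traps but no cases (or vice versa) are kept.
    out = density.join(case_counts, how="outer").fillna({"cases": 0})
    return out.reset_index()
```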
The mean numbers of mosquitos and traps counted per week across the three cities were: Vitoria – 716.8 ± 342.5 standard deviation (SD) and 1391.6 ± 32.0 SD, respectively; GV – 212.5 ± 81.8 SD and 373.0 ± 50.4 SD, respectively; SL – 95.4 ± 72.3 SD and 411.2 ± 123.8 SD, respectively (Table S1). The area monitored per city was: 33 km2 (Vitoria), 27 km2 (GV) and 31 km2 (SL), which yields mean weekly trap monitoring densities of 42.2, 13.8 and 13.3 traps per km2 in the three cities, respectively (Additional file 1, Spreadsheet "Traps"). In all three cities, routine vector control occurs following guidelines of the Brazilian Dengue Control Program. This includes mainly larvicide and source reduction activities that occur systematically (moving from block to block) throughout each city year-round. In addition to these activities, adulticide is conducted in blocks where high numbers of mosquitoes are identified, following the recommendations by Ecovec (www.ecovec.com). The effects of these controls, and other factors that affect mosquito populations such as weather, are implicit in the mosquito density data. Thus, although mosquito populations are altered by several biotic and abiotic factors, the mosquito surveillance data are a means of directly examining effects of mosquito density on disease incidence. Statistical model structure and parameter estimation Weekly cases of dengue fever in each neighborhood in Vitoria from Nov. 2007 through Dec. 2011 (4.17 years) were used first for model fitting. Data were modeled using a generalized linear mixed model with a Poisson error structure and log link. Differences between neighborhoods in population size were accounted for through an offset term. Random effects of neighborhood were included to account for within-neighborhood error correlations. The full model used for model selection was of the form: $$ \begin{aligned} y(i,t) &\sim \mathrm{Poisson}\left[\lambda(i,t)\right],\\ \lambda(i,t) &= \exp\left[Y(i,t) + \log\left(P(i)\right) + \pi(i)\right],\\ \pi(i) &\sim N\left[0, \sigma^2\right], \end{aligned} $$ where Y(i,t) is defined in Eqn. 3 (below), P is the neighborhood population size and π is the random effect of neighborhood. In order to compare the role of mosquito density data in prediction of case notifications at a larger spatial scale, an analogous generalized linear model with mosquito covariate data aggregated to the city level was analyzed. Note that in this model structure, connectivity between neighborhoods, random effects of neighborhoods and differences in neighborhood population sizes were irrelevant and thus the model structure reduces to a simple linear regression with a Poisson error structure as follows: $$ \begin{aligned} y(t) &\sim \mathrm{Poisson}\left[\lambda(t)\right],\\ \lambda(t) &= \exp\left[Y(t)\right], \end{aligned} $$ where Y(t) represents mean mosquito density in the entire city during week t. Approximate Bayesian inference by integrated nested Laplace approximations was used for parameter estimation. R software Version 3.0.1 and the package R-INLA (www.r-inla.org) were used to perform the analyses [22]. Description of covariates The importance and structure of spatial coupling between neighborhoods (a proxy for human movement) were examined as a main effect using a modified gravity model (described below). All covariate data were normalized in order to compare the strength of parameter estimates.
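To make the model structure above concrete, the following toy simulation generates data from the neighborhood-level Poisson model (log link, log-population offset, and iid normal neighborhood effects). Parameter values and the intercept are made up for illustration; this is a sketch, not the R-INLA code used for the actual estimation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and parameter values (made up for illustration only).
n_neigh, n_weeks = 75, 200           # e.g., Vitoria has 75 neighborhoods
beta0 = -9.0                         # intercept, keeps per-capita rates small in this toy
beta = np.array([0.2, 0.5])          # coefficients on two normalized covariates
sigma = 0.3                          # SD of the iid neighborhood random effect

pop = rng.integers(500, 10_000, size=n_neigh)       # neighborhood population sizes P(i)
pi = rng.normal(0.0, sigma, size=n_neigh)           # pi(i) ~ N(0, sigma^2)
X = rng.normal(size=(n_neigh, n_weeks, 2))          # normalized covariates entering Y(i,t)

Y = beta0 + X @ beta                                # linear predictor Y(i,t)
log_lam = Y + np.log(pop)[:, None] + pi[:, None]    # log link with log-population offset
cases = rng.poisson(np.exp(log_lam))                # y(i,t) ~ Poisson[lambda(i,t)]

print(cases.shape, cases.mean())
```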
A term for spatial autocorrelation was not included in the final models because it was not significant (according to a Moran's I test on shifted residuals) in preliminary fitting of gravity model terms. We also compared our models, which included a covariate-based exploration of the case data, with autoregressive lag 1 models (AR1) and found similar levels of predictive power (data not shown). Gravity models have been used effectively to explain the spatial spread of measles between cities in England [23]. The traditional gravity model assumes that movement between locations is a function of both population size and distance between populations. The concept is that areas with large population sizes act as disease sources by attracting susceptible hosts. The "force" of disease spread becomes weaker the further away hosts are from the large populations. This relationship works well for describing infection spread between cities [23,24], but human movement between neighborhoods within a city due to commuting, visiting friends or going to shopping areas [9,10] may not necessarily be correlated with population size and/or distance. Secondly, dengue is a vector-borne disease, meaning that the presence of vectors is required for transmission from the donor population. Thus, we used a modified version of a gravity model, incorporating effects of mosquito density and using additional neighborhood characteristics to describe spatial coupling. We were interested in testing whether these commonly available approximations could be useful for interpreting mosquito surveillance data in terms of human cases because direct measures of neighborhood connectivity are not usually available without time-consuming, expensive field studies. In the full model, the rate of case notification in neighborhood i at time t is: $$ Y_{i,t} = \beta_1 M_{i,t-x1} + \beta_2 Y_{i,t-y1} + \beta_3 M_{i,t-x1} Y_{i,t-y1} + \beta_4 \sum_{j} M_{j,t-x2}^{\alpha_1} + \beta_5 \sum_{j} \left( Y_{j,t-y2} / f(x_j) \right)^{\alpha_2} + \beta_6 \sum_{j} \left( M_{j,t-x2}\, Y_{j,t-y2} / f(x_j) \right)^{\alpha_3}, $$ where Y is the number of cases, M is the mosquito-trap prevalence, α is a scaling parameter, i and j denote neighborhoods (where i ≠ j), t is the weekly time step, and x1, x2, y1 and y2 are time lags in weeks for mosquito and disease data at the within- and between-neighborhood scales. f(x_j) is a proxy for neighborhood connectivity (i.e., a term for weighting case notifications in neighborhood j according to factors that could describe disease connections between neighborhoods, such as human movement; Table 1). Distance was calculated in ArcGIS using road data, such that the distance between neighborhoods was proportional to the amount of travel time between neighborhood centroids (or centroids adjusted to the nearest road). Note that f(x_j) does not vary in time, which is an appropriate approximation since our time series is <5 years. Table 1: Candidate structures for the components of f(x_j). The criteria used for model selection were the Deviance Information Criterion (DIC; [25]) and the mean log Conditional Predictive Ordinates (mlCPO), which is analogous to leave-one-out cross-validation [26]. Lower DIC and mlCPO values indicate better predictive power of the model.
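As a sketch of how the covariates in the equation above could be assembled from neighborhood-by-week series, the code below constructs, for one focal neighborhood and week, the within-neighborhood lagged terms and the global terms, using 3-week lag windows (as described in the model selection steps below) and travel distance as one candidate f(x_j) weighting. All function and variable names are hypothetical, and the fixed lags and exponents are placeholders.

```python
import numpy as np

def three_week_lag(series, t, lag):
    """Average of weeks t-lag-2 .. t-lag (the 3-week lag windows used in model selection)."""
    return series[..., t - lag - 2 : t - lag + 1].mean(axis=-1)

def gravity_covariates(M, Y, road_dist, i, t, x1, y1, x2, y2, a1, a2, a3):
    """M, Y: arrays of shape (n_neigh, n_weeks) with mosquito density and case counts.
    road_dist: (n_neigh, n_neigh) travel distances; i, t: focal neighborhood and week.
    Returns the six covariates multiplying beta_1..beta_6 in the equation above."""
    j = np.arange(M.shape[0]) != i            # all other neighborhoods (j != i)
    w = 1.0 / road_dist[i, j]                 # f(x_j) = d_ij, i.e., weight cases by 1/distance

    M_i = three_week_lag(M[i], t, x1)         # within-neighborhood lagged mosquito density
    Y_i = three_week_lag(Y[i], t, y1)         # within-neighborhood lagged cases
    M_j = three_week_lag(M[j], t, x2)         # lagged mosquito density, other neighborhoods
    Y_j = three_week_lag(Y[j], t, y2)         # lagged cases, other neighborhoods

    return np.array([
        M_i,                                  # beta_1 term
        Y_i,                                  # beta_2 term
        M_i * Y_i,                            # beta_3 interaction
        np.sum(M_j ** a1),                    # beta_4 global mosquito term (unweighted)
        np.sum((Y_j * w) ** a2),              # beta_5 distance-weighted global case term
        np.sum((M_j * Y_j * w) ** a3),        # beta_6 weighted global interaction term
    ])
```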
Because the mlCPO showed the same rank order as a measure of explained variation (Spearman's r coefficient between the observed and model-predicted data), we present only the DIC alongside r for simplicity. Because of the complexity of how the covariate data could impact human cases, model selection was conducted in several stages, broadly as follows:

1. Selection of lags. For each possible covariate (as shown in Equation 1) individually, we identified the best lag time between it and the response variable (x1, x2, y1, y2, z1 and z2 in Equation 1). Lags were calculated as a 3-week average because we hypothesized that a window of time in the past may best explain the relationship (preliminary analyses confirmed this hypothesis). The 3-week window was chosen because 2–3 weeks is the combined amount of time from an infectious mosquito bite to a case report, on average [27]. This is simply the combination of average incubation periods in vectors and humans and assumes that an infectious vector would transmit immediately upon becoming infectious. Thus, lag 1 was the average of weeks 1 to 3 in the past. The longest lag we investigated was 18–20 weeks.

2. Selection of scaling factors. Similar to previous work [23], we hypothesized that a scaling factor on the gravity terms would be important because these covariates described interactions that could be non-linear. Because initial attempts to fit this parameter were unsuccessful due to the effects of its non-linearity on convergence, we identified the best scaling factor (α1–α3 in Eqn. 3) for each possible between-neighborhood covariate (Eqn. 3, last 3 covariates) by fitting models using a range of fixed scaling factors (α = 0.001, 0.01, 0.1, 0.5, 1, 2). These values were chosen because they represent a range of biologically realistic functions for the relationship between gravity components (concave-up, concave-down or linear). The lowest value (i.e., 0.001) was chosen because the DIC converged to its lowest value there (representing asymptotic behavior of the best value), and values above the highest (i.e., 2) lay in the part of the curve where the DIC continued to increase (i.e., values > 2 did not produce good fits). A grid-search sketch over these fixed values is given after this list.

3. Mosquito and human case terms. We compared models with only mosquito density data ($M_{i,t-x_1}$ and $\sum_j M_{j,t-x_2}^{\alpha_1}$) to those with only human-case notifications ($Y_{i,t-y_1}$ and $\sum_j (Y_{j,t-y_2}/f(x_j))^{\alpha_2}$), and to those with both types of covariate data (i.e., Eqn. 3), to investigate the role of mosquito density data.

4. Spatial scale of between-neighborhood interactions. For the between-neighborhood effects, we compared two scales: 1) nearest-neighbor effects (i.e., local) – where only covariate data from immediately adjacent neighborhoods were used to predict cases, and 2) global effects – where data from all other neighborhoods city-wide were used to predict cases.

5. Proxies describing between-neighborhood weights. For the global between-neighborhood covariates, we compared different functions for weighting between-neighborhood effects (f(x_j) in Eqn. 1), including economic value (1/E_j), population density (1/D_j) and travel distance between neighborhoods (1/d_ij; Table 1). We hypothesized that high-economy or high-density neighborhoods would attract more people on a regular basis, creating hubs for disease transmission and spatial spread. Similarly, we hypothesized that disease transmission from other neighborhoods would be more likely between neighborhoods with faster road travel.
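The sketch below illustrates the fixed-grid selection of a scaling factor (step 2), building on the hypothetical objects of the previous sketch (`Y_lag`, `d`, `gravity_sum`, `n_weeks`, `n_nb`). It is not the authors' code: the long-format data frame, the population vector and the single-covariate formula are assumptions made only to show how DIC, mlCPO and Spearman's r could be compared across fixed values of α.

```r
# Grid search over fixed scaling factors, scoring each candidate by DIC, mlCPO
# and Spearman's r between observed and fitted values (all names illustrative).
library(INLA)

pop <- round(runif(n_nb, 2e3, 2e4))
dat <- data.frame(
  cases        = as.vector(Y),                         # column-major: weeks within neighborhood
  pop          = rep(pop, each = n_weeks),
  neighborhood = factor(rep(seq_len(n_nb), each = n_weeks))
)

alphas <- c(0.001, 0.01, 0.1, 0.5, 1, 2)

score_alpha <- function(a) {
  G    <- as.vector(gravity_sum(Y_lag, a, d))          # exponent per term, then summed
  keep <- !is.na(G)                                    # drop weeks without a full lag window
  dat2 <- dat[keep, ]
  dat2$G <- as.numeric(scale(G[keep]))                 # normalize as in the main analysis
  fit <- inla(cases ~ G + offset(log(pop)) + f(neighborhood, model = "iid"),
              family = "poisson", data = dat2,
              control.predictor = list(compute = TRUE),
              control.compute   = list(dic = TRUE, cpo = TRUE))
  c(alpha = a,
    DIC   = fit$dic$dic,
    mlCPO = mean(log(fit$cpo$cpo), na.rm = TRUE),
    r     = cor(dat2$cases, fit$summary.fitted.values$mean, method = "spearman"))
}

grid <- t(sapply(alphas, score_alpha))
grid[which.min(grid[, "DIC"]), ]                       # fixed alpha with the lowest DIC
```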
These ideas are similar to a recent study showing that dengue hotspots occur along major roads and transportation hubs [28]. Because mosquitos rarely travel beyond 200 m [29], a distance that falls mainly within a single neighborhood, the weightings were only applied to the terms with case notification data, β5 and β6 (Eqn. 3), and not to the global mosquito term, β4 (Eqn. 3). Because Steps 1 and 2 were not the focus of our analysis, results from these analyses are presented in the Supplementary Material (Additional file 2: Figure S1, Additional file 3: Figure S2, Additional file 4: Figure S3, Additional file 5: Figure S4, Additional file 6: Figure S5, Additional file 7: Figure S6). Results from Steps 3–5 are reported in the main text.

Model evaluation

All steps were conducted using data from Vitoria from week 45 of 2007 through 2011, thus withholding data from 2012 for evaluation of the final model by out-of-sample prediction (i.e., forecasting). As a second means of model validation, we applied the best model selected from the Vitoria data to data from two other cities: GV and SL. For this, we re-estimated parameters using the best-fit Vitoria-derived model structure from our model selection procedure and covariate data from each of the other cities. Again, we only used a portion of the data for parameter estimation and predicted both this in-sample data as well as the remaining (out-of-sample) data. Because the magnitude and direction of parameter values in the three cities were so different, we did not attempt to predict data in the other two cities using parameters estimated from Vitoria covariate data. Instead, we compared the city-specific parameters. We also conducted Steps 1, 2, 3 and 5 (above) on data from GV and SL in order to evaluate the generality of conclusions drawn based on the Vitoria time series and to gain a better understanding of how the best model may differ due to city-specific circumstances. The latter two cities did not have as much data: GV in-sample – 90 weeks, GV out-of-sample – 30 weeks, SL in-sample – 86 weeks, and SL out-of-sample – 13 weeks. In-sample data were from 2009 and 2010, while out-of-sample data were from 2011.

Role of mosquito data

There was little visual correlation between the weekly time series of mosquito data and human case data when considering the data across space or time (Figure 1). This lack of visual correlation was confirmed using a spatio-temporal Bayesian regression model that accounted for both within- and between-neighborhood effects of mosquito density (Figures 2 and 3). Models that included only lagged case data (without mosquito surveillance data) fit the observed case data much better than models with only mosquito data (Figure 3). Only a very slight gain in fit over cases alone (r = 0.62 vs 0.63; Figure 3 and Table 2; for cases alone vs the full model, respectively) was obtained by considering the effects of an interaction between mosquito density and case notifications (Figure 2C, right – compare red bars to blue or grey bars), and this did not translate to increased forecasting ability (r = 0.50 vs 0.49; Figure 3 and Table 2; for cases alone vs the full model, respectively). Similarly, the mosquito surveillance data alone were only weak predictors of human cases in the other two cities (Figure 4) as well as at the city-level scale in Vitoria (Additional file 8: Figure S7 and Additional file 9: Figure S8).
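To illustrate the out-of-sample evaluation described under "Model evaluation" above: in R-INLA, forecasts for withheld weeks can be produced by setting the response to NA for those rows, so that the linear predictor is still computed for them. The sketch below builds on the hypothetical objects from the previous sketches; the one-year holdout and the covariate set are assumptions made only for illustration, not the authors' procedure.

```r
# Builds on M_i, Y_i, dat, n_weeks and n_nb from the earlier sketches.
library(INLA)

dat$week  <- rep(seq_len(n_weeks), times = n_nb)   # week index within each neighborhood
dat$M_lag <- as.vector(M_i)                        # lagged within-neighborhood mosquito term
dat$Y_lag <- as.vector(Y_i)                        # lagged within-neighborhood case term

holdout <- dat$week > n_weeks - 52                 # e.g. withhold the final year
dat$cases_fit <- ifelse(holdout, NA, dat$cases)    # NA responses are predicted, not fitted

keep <- !is.na(dat$M_lag) & !is.na(dat$Y_lag)      # drop weeks without a full lag window
d2   <- dat[keep, ]
hold <- holdout[keep]

fit <- inla(cases_fit ~ M_lag + Y_lag + offset(log(pop)) +
              f(neighborhood, model = "iid"),
            family = "poisson", data = d2,
            control.predictor = list(compute = TRUE, link = 1),  # response-scale predictions
            control.compute   = list(dic = TRUE))

pred <- fit$summary.fitted.values$mean             # in-sample fits and out-of-sample forecasts

# Goodness-of-fit as Spearman's correlation, reported separately in- and out-of-sample
r_in  <- cor(d2$cases[!hold], pred[!hold], method = "spearman")
r_out <- cor(d2$cases[hold],  pred[hold],  method = "spearman")
c(in_sample = r_in, out_of_sample = r_out)
```

Summing the predictions over neighborhoods within each week would give city-level curves analogous to those presented in Figure 3.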
The difference in R² between the city-level (0.18; Additional file 9: Figure S8B) and the neighborhood-level (0.27; from r = 0.52; Figure 3A) spatial scales highlights that accounting for neighborhood-level effects is important for linking mosquito density to case data.

Trends in mosquito counts and human case data. (A) Neighborhood distribution of mosquito density (number of mosquitos/number of trap inspections) and prevalence of dengue (number of reported cases/neighborhood population size) between 2008 and 2012 in Vitoria. Mosquito density is indicated by the size of the black circles; total numbers of cases are indicated by the red shading of neighborhoods (darker color indicates more total cases). The white area in the middle is a steep mountain where no monitoring was conducted. The grey neighborhoods to the north were also not monitored. (B) The difference between weekly cases in Vitoria for each year relative to the average number of weekly cases from 2008–2012. Each line represents deviations from a different year as indicated in the legend. (C) Same as B but for the prevalence of mosquitoes.

Model selection results from Vitoria data. Each bar represents the DIC (left-side plots) or Spearman's correlation coefficient between the model-predicted and observed data (right-side plots) for a given model. Each covariate was lagged and scaled to the best values (i.e., from model selection shown in Additional file 2: Figure S1 and Additional file 3: Figure S2). "Null" indicates a model with only the neighborhood population size as an offset and random effects of neighborhood. (A) Within-neighborhood effects. Lags are indicated. No scaling factors were used. Full_i is $M_{i,t-13} + Y_{i,t-1} + M_{i,t-13}Y_{i,t-1}$. (B) Comparison of nearest-neighbor (local) versus all between-neighborhood (global) effects. Lags and scales, respectively, were: 13, 0.5 (local $M_j$); 12, 0.1 (global $M_j$); 1, 0.5 (local and global $Y_j$); 13 & 1, 0.5 (local $M_jY_j$); and 1 & 1, 0.1 (local $M_jY_j$). Structure of Full_j was $\sum_j M_{j,t-x}^{\alpha_1} + \sum_j Y_{j,t-y}^{\alpha_2} + \sum_j (M_{j,t-z} Y_{j,t-y})^{\alpha_3}$. Full_i covariates (as specified in panel A) were included in each model (i.e., there are 6 covariates in "Full_j"). (C) Effects of the type of approximation (f(x_j)) used for weighting global connectivity. Form used for f(x_j) is indicated under the bars (d, distance; E, economy; D, density). Full_i covariates (as specified in panel A) were included in each model (i.e., red and blue bars have 4 covariates). Lags and scales, respectively, were: 1, 0.5 ($Y_j f(x_j)$); and 1 & 1, 0.1 ($M_j Y_j f(x_j)$). Structure of Full_j was $\sum_j M_{j,t-12}^{0.1} + \sum_j (Y_{j,t-1} f(x_j))^{0.5} + \sum_j (M_{j,t-1} Y_{j,t-1} f(x_j))^{0.1}$.

City-level summary of model fits for Vitoria. The best model with only mosquito covariates, $M_i$ and $\sum_j M_j$ (A), is compared to the best model with only dengue-case covariates, $Y_i$ and $\sum_j (Y_j f(x_j))$ (B). Models were fitted using neighborhood-level data but aggregated to the city level for presentation. Goodness-of-fit was calculated as the Spearman's correlation (r) between the observed and model-predicted values for the fitted model ("In-sample", solid lines) and out-of-sample predictions (dashed lines). Correlation coefficients are presented for both the aggregated city-level data (main plots) and for the neighborhood-level results presented in the scatterplot insets.
Observed data (black lines; solid: in-sample, dashed: out-of-sample); model predictions (blue lines; solid: in-sample, dashed: out-of-sample); 97.5% credible intervals (red shades: in-sample; pink shades: out-of-sample). (A) $Y_{i,t} = \beta_1 M_{i,t-13} + \beta_2 \sum_j M_{j,t-12}^{0.1} + \log(P_i) + \pi_i$. (B) $Y_{i,t} = \beta_1 Y_{i,t-1} + \beta_2 \sum_j (Y_{j,t-1}/d_{ij})^{0.5} + \log(P_i) + \pi_i$; only the best $Y_j$ term is presented (although they are all similar). For both models, the best lag and scale terms were included as indicated (lag terms are a mean from a 3-week window, e.g., 13 represents the mean for weeks 13–15). Y are cases, M are mosquitos, P is population size, π is the neighborhood random effect, t is the week, i is the target neighborhood and j indexes all other neighborhoods (j ≠ i) over which the sums are taken.

Table 2 Goodness-of-fit for the best model selected using data from Vitoria

Comparison of Vitoria model selection results with other cities. Each bar represents the relative DIC (left-side plots) or Spearman's correlation coefficient between the model-predicted and observed data (right-side plots) for a given model. Relative DIC is the model DIC divided by the DIC for the null model (model with only the offset and random effects of neighborhood). Each covariate was lagged and scaled using the best values (i.e., from model selection shown in Additional file 2: Figure S1, Additional file 3: Figure S2, Additional file 4: Figure S3, Additional file 5: Figure S4, Additional file 6: Figure S5, Additional file 7: Figure S6). (A) Within-neighborhood effects. Lags are indicated beneath the bars. Model structure is shown in the legend. No scaling factors were used. (B) Comparison of the type of approximation (f(x_j)) used for weighting global connectivity on the case-notification terms (red). Form used for f(x_j) is indicated under the bars (1, random; d, distance; E, economy; C, commercial structures; R, residences; D, density). The global mosquito term is shown for comparison (black bar). Full_i covariates were included in each model (i.e., there are 4 covariates). (C) Comparison of f(x_j) in the full models. Form used for f(x_j) is indicated under the bars (as in B). Full_i covariates were included in each model (i.e., there are 6 covariates).

Utility of proxies for weighting between-neighborhood case data

To investigate how spatial dimensions may shape the relationship between mosquito density and human cases of dengue, we included different scales of spatial disease data (local versus global) at the neighborhood level within Vitoria. The models that included global coupling performed better than those that only allowed for nearest-neighbor connections (Figure 2B). We also considered different factors that could explain patterns of city-wide human movement, such as economic value of neighborhoods, population density or distance between them. The best neighborhood-level model for Vitoria was:

$$
\log(Y_{i,t}) = \beta_1 M_{i,t-13} + \beta_2 Y_{i,t-1} + \beta_3 M_{i,t-1} Y_{i,t-1} + \beta_4 \sum_{j} M_{j,t-5}^{0.1} + \beta_5 \sum_{j} \left( Y_{j,t-1}/d_{ij} \right)^{0.5} + \beta_6 \sum_{j} \left( M_{j,t-1} Y_{j,t-1}/d_{ij} \right)^{0.1} + \log(P_i) + \pi_i.
$$

Although the DIC score was lowest for this full, "best" model, the mlCPOs (data not shown) and r values were very similar for all proxies of neighborhood connectivity (Figure 2C).
Thus, although the mlCPOs and r values followed the same rank order as the DIC values, the high similarity of r values from models with different proxies for neighborhood connectivity did not indicate biologically important differences between the models in any of the cities (Figure 4). In summary, we found that models including global between-neighborhood effects in addition to within-neighborhood effects performed best, and that all 3 types of covariates (mosquito density, case notifications and the interaction of these two) contributed to the best model, but that the specific proxies for weighting global connectivity performed similarly to one another.

Generality of the Vitoria model

The general structure of the Vitoria model (including mosquito lags and scaling factors) fit the neighborhood-level data remarkably well in all three cities when the results were interpreted at the city level (Figure 5, Table 2). For GV, the model also did very well at forecasting future data using parameters that were estimated on an earlier segment of data (Figure 5, Table 2). In Vitoria and SL, the model performed more poorly at forecasting, but the forecasted portion of the time series included only a period of low disease prevalence (thus its ability to forecast an upcoming outbreak is unclear). At the neighborhood level, the model produced smaller differences between the observed and model-predicted values in Vitoria and GV relative to SL (Figure 5, insets; Table 2). When model selection was conducted on GV and SL, models with different mosquito lags and weighting factors for between-neighborhood connectivity (relative to the Vitoria model) were best when considering DIC (Figure 4C, left – compare green bar to grey bars). However, the differences in explained variation and forecasting capability were almost indistinguishable (Figure 4C, right – compare green bar to grey bars), suggesting that the differences in DIC were not biologically important.

City-level summary of full-model fits for three cities. Models were fitted using neighborhood-level data but aggregated to the city level for presentation. Observed data (black lines; solid: in-sample, dashed: out-of-sample); model predictions (blue lines; solid: in-sample, dashed: out-of-sample); 97.5% credible intervals (red shades: in-sample; pink shades: out-of-sample). Insets display the neighborhood-level fits (black points: in-sample data; pink points: out-of-sample data). The best full model from each city is presented. (A) Vitoria: $Y_{i,t} = \beta_1 M_{i,t-13} + \beta_2 Y_{i,t-1} + \beta_3 M_{i,t-1} Y_{i,t-1} + \beta_4 \sum_j M_{j,t-5}^{0.1} + \beta_5 \sum_j (Y_{j,t-1}/d_{ij})^{0.5} + \beta_6 \sum_j (M_{j,t-1} Y_{j,t-1}/d_{ij})^{0.1} + \log(P_i) + \pi_i$. (B) GV: $Y_{i,t} = \beta_1 M_{i,t-2} + \beta_2 Y_{i,t-1} + \beta_3 M_{i,t-4} Y_{i,t-4} + \beta_4 \sum_j M_{j,t-1}^{2} + \beta_5 \sum_j (Y_{j,t-1} D_j)^{0.1} + \beta_6 \sum_j (M_{j,t-1} Y_{j,t-1} D_j)^{0.5} + \log(P_i) + \pi_i$. (C) SL: $Y_{i,t} = \beta_1 M_{i,t-6} + \beta_2 Y_{i,t-1} + \beta_3 M_{i,t-1} Y_{i,t-1} + \beta_4 \sum_j M_{j,t-6}^{0.1} + \beta_5 \sum_j (Y_{j,t-1} R_j)^{0.001} + \beta_6 \sum_j (M_{j,t-1} Y_{j,t-1} R_j)^{0.1} + \log(P_i) + \pi_i$; notation is as in Figure 4.

The best model for each city included quite different lag times between mosquito density and cases: 13–15 weeks for Vitoria, 1–3 or 2–4 weeks for GV and 6–8 weeks for SL (Additional file 3: Figure S2, Additional file 5: Figure S4 and Additional file 7: Figure S6). However, a lag of 1–3 weeks was always best for the case notification data. In all three cities, between-neighborhood effects were generally stronger than within-neighborhood effects (Figure 6).
The strength and direction of mosquito density parameters shifted to some extent when case data were included in the model, although the changes were inconsistent across cities (Figure 6). However, when we compared the parameter values estimated using the Vitoria model to those estimated using the best models from GV and SL, which included different lags for mosquito data (Figure 6, Additional file 10: Figure S9), mosquito parameters showed more similar directional effects across the cities. Thus, although the general structure of the Vitoria model may be a useful predictive tool, there are some quantitative differences between cities in the role of mosquito density in predicting cases.

Credible intervals for each covariate in the mosquitoes-only models (A) and the full models (B). Vitoria (black), GV (blue), SL (red). Thick solid lines are covariates in the best model selected from Vitoria data: $Y_{i,t} = \beta_1 M_{i,t-13} + \beta_2 Y_{i,t-1} + \beta_3 Y_{i,t-1} M_{i,t-13} + \beta_4 \sum_j M_{j,t-5}^{0.1} + \beta_5 \sum_j (Y_{j,t-1}/d_{ij})^{0.5} + \beta_6 \sum_j (M_{j,t-5} Y_{j,t-1}/d_{ij})^{0.1} + \log(P_i) + \pi_i$. Thin dashed lines are for the best models from the other cities: GV: $Y_{i,t} = \beta_1 M_{i,t-2} + \beta_2 Y_{i,t-1} + \beta_3 M_{i,t-4} Y_{i,t-4} + \beta_4 \sum_j M_{j,t-1}^{2} + \beta_5 \sum_j (Y_{j,t-1} D_j)^{0.1} + \beta_6 \sum_j (M_{j,t-1} Y_{j,t-1} D_j)^{0.5} + \log(P_i) + \pi_i$; SL: $Y_{i,t} = \beta_1 M_{i,t-6} + \beta_2 Y_{i,t-1} + \beta_3 M_{i,t-1} Y_{i,t-1} + \beta_4 \sum_j M_{j,t-6}^{0.1} + \beta_5 \sum_j (Y_{j,t-1} R_j)^{0.001} + \beta_6 \sum_j (M_{j,t-1} Y_{j,t-1} R_j)^{0.1} + \log(P_i) + \pi_i$.

We found that even with city-wide household-level mosquito surveillance data, the relationship between mosquito density and cases is weak. Although MI-dengue has been effective at decreasing cases city-wide by basing spatial prioritization on within-neighborhood data on mosquito density and recent infections in humans [16], our results highlight that additional data may be useful for further improvements in preventing cases of dengue city-wide. Previous work has similarly found a weak [5-8] or even negative [3,4] relationship between household mosquito density and cases. Part of the reason the role of mosquito density remains unclear can likely be attributed to high variation in vector competence across relatively fine spatial and temporal scales [30], emphasizing that surveillance for infected mosquitos should be prioritized. In fact, a strong relationship between the density of infected mosquitos and cases has been observed [8]. In our system, a new technology that monitors the density of infected mosquitoes by serotype, MI-Virus, was recently developed, but it has not been implemented long enough, or city-wide, for us to evaluate those data in this study (although the information provided by MI-Virus is already being used for spatial targeting of vector control where it is available). As city-wide MI-Virus data become available at the same spatial scale as the density estimates, analyses should be extended to include these data, which may lead to more accurate guidance for spatial prioritization of vector controls. Similarly, to extract the most information from the MI-Virus data, it will be important to obtain data on human diagnoses at the level of serotype, because the relationship between mosquito density and human cases depends on the interaction of serotype-specific pre-existing immunity and the prevalence of different serotypes [31,32]. Understanding the role of mosquito density in predicting human cases of dengue fever under any experimental design is complicated by sampling scale and variability.
It is thought that the mosquito density required to sustain transmission is in fact very low [33]. If the sampling techniques used to enumerate mosquito density are too coarse to distinguish prevalence values around the transmission threshold, then a sampling protocol with more replication, or a trapping technology that captures more mosquitos, may be required. Studies that aim to determine the sampling effort needed to distinguish low mosquito densities (i.e., near the transmission threshold) with adequate precision, such as mass trapping in enclosed mosquito populations of known size using various levels of replication and spatial arrangements, are needed to assess the accuracy and precision of mosquito surveillance data. Likewise, better quantification of the mosquito thresholds that permit transmission among humans is important for choosing appropriate trapping parameters. In Brazil, routine vector control occurs city-wide throughout the year following national vector control guidelines [34]. Very broadly, personnel move through entire cities, block by block, neighborhood by neighborhood, in a systematic manner over the course of several months, mainly applying larvicide and conducting source reduction. Documentation of these efforts was too sparse to be included in our models, but we do not expect that they would have obscured our ability to quantify the relationship between mosquito density and cases, because they target immature stages and our system quantifies gravid adult females at a weekly scale. Additional controls are spatially targeted based on mosquito surveillance data, dengue case data and data on infected mosquitos when they are available. However, controls based on human cases are often too late to prevent transmission because suspected cases are not confirmed until 6–8 weeks after notification. In cities where MI-dengue surveillance is conducted, the additional control activities can be targeted to blocks with the highest mosquito densities (or blocks with infected mosquitos where MI-Virus data are available) very rapidly after the mosquito populations achieve high numbers, because the longest time lag between trap checks is one week and data can be visualized on the online MI-dengue mapping system immediately after a trap is examined [15]. If MI-dengue-based vector-control work varies in intensity non-randomly, as is likely the case due to variability in efficacy that depends on urban structures, and some blocks are responsible for more transmission than others, the combined effect could be a weak relationship between mosquito density and human cases. Moreover, the MI-dengue-based vector-control activities could explain the different best-fitting lags for mosquito data among cities if, for example, in some cities the lag between transmission and available data/response is consistently longer than in other cities. Better documentation of the timing, intensity and effectiveness of vector control work in response to MI-dengue surveillance data is needed to investigate how these activities affect the interpretation of mosquito density data for strategic planning of vector control work. Although models including only mosquito data performed more poorly than those containing only case-notification data, the interaction between mosquito density and case notifications was strongly significant in all three cities. Thus, consideration of the mosquito-human interaction is important in order to more accurately predict cases in space and time.
Theoretical work has similarly found a low correlation between $R_0$ (the average number of secondary cases in a naïve population) and mosquito density within an area due to human movement [10]. Also, when the mosquito population is highly heterogeneous, frequent travel to areas with high mosquito density can cause an epidemic or sustain low levels of transmission (depending on connectivity levels) [35], providing mechanistic insight into why mosquito density alone may not be a good predictor of human cases. The importance of between-neighborhood effects in our models suggests that movement among neighborhoods is an important driver of dengue dynamics and that the neighborhood scale, given appropriate movement data, may be effective at capturing mosquito-human interactions. We found that global between-neighborhood effects were stronger than either nearest-neighbor effects or within-neighborhood effects, suggesting that many infections occurred far from the home neighborhood. Our finding that non-local effects within a city impact spatial dynamics is similar to previous work where significant spatio-temporal clustering occurred at distances up to 2.8 km [36] and where 34.7% of cases did not show any spatio-temporal clustering [11]. However, the stronger role of non-local relative to local spatial coupling in our study contrasts with the finding that house-to-house human movement may predominantly drive spatial spread [9]. This discrepancy in the relative role of case data from farther distances may be at least partly due to differences in urban characteristics and human behavior. While our results show that city-wide cases impact how mosquito density translates to human cases, we were not able to understand the mechanistic nature of this effect more fully given the available data. We hypothesized that economic values, population densities or travel time on roads may be good approximations to commuting patterns, but weighting between-neighborhood effects by these metrics did not explain significantly more variation than unweighted mixing between neighborhoods. This may be because when the force of infection is high in several neighborhoods simultaneously, the probability of contact (and hence transmission) is increased in most neighborhoods, thus diluting the role of more specific patterns of connectivity (a similar idea to theoretical work showing that high rates of movement increase overall transmission [35]). However, because of the importance of the term for the interaction between mosquito density and cases, it is possible that a more direct measure of neighborhood connectivity (e.g. measurements of between-neighborhood human movement) would improve the predictive power of our model by making more accurate spatial predictions when transmission rates are lower. Our analysis showed that the best model (as determined using Vitoria data) performed quite well at multiple tests of predictive power: forecasting future data in Vitoria as well as prediction of in- and out-of-sample data in two additional cities. This emphasizes that the general structure of the Vitoria model is a useful framework for quantifying different scales of spatial coupling in different cities. However, because the operational scale of vector control is the city block, using our model structure with block-level case and mosquito surveillance data will be most useful for directing operational work spatially.
We expected that the lag-time between mosquito density and human cases would approximate the virus life cycle (i.e., extrinsic incubation period + search time + intrinsic incubation period). While this was true for GV and SL (2 and 6 week lags), for Vitoria, the strongest signal was at a 13-week lag (although a strong signal was also observed at 4 weeks). The difference between cities in the most significant lag time between mosquito density and cases could be due to differences in the temporal patterns of vector control work (i.e., variable resources over time), the relative emphasis of different types of control (i.e., response-based versus prevention-based), or the total amount of resources available to conduct vector control (i.e., ability to respond to some versus many high-risk sites). We attempted to investigate these factors using vector control data from the 3 cities but we discovered that much of the control work was unrecorded. A study that includes a standardized method for recording the dates, times, location, type and amount of vector control - alongside MI-dengue surveillance - will be instrumental in interpreting the effects of control on the lag between mosquito density and human cases, and ultimately on reducing uncertainty on how to spatially prioritize vector control work. The strong predictive ability of the case data alone shows that reasonable quantitative neighborhood-level predictions, especially with regards to the timing and magnitude of outbreaks, can be made from case notification data in the absence of mosquito surveillance data. Additionally, although results from Vitoria are based on almost 5 years of weekly data, similarly good fits and forecasts were possible in GV where less than 2 years of weekly data were available. However, the best forecasts were from models that included only space-time autocorrelation, instead of biological covariates (models not presented here). Thus, if the interest is in prediction for response planning, a non-mechanistic saturated model based on autocorrelation is likely to be the best approach. We did not present these models because our interest was in gaining an understanding of the relative role of biological factors and their spatial scales. Furthermore, the case data are often not available until about 6–8 weeks after diagnosis, which is why it is important to explore the utility of other data sources that may be available sooner. A mechanistic understanding of how mosquito density maps to disease transmission among humans is crucial for the development of quantitative tools that could guide spatial prioritization of vector control [10,35,37]. Despite the demonstrated efficacy of MI-dengue at preventing cases of dengue fever in several cities [16], our current work emphasizes that even further case reductions may be achieved if spatial prioritization occurred by additionally considering city-wide neighborhood connectivity – i.e., prioritizing highly connected areas with high mosquito density. As we did not find that readily available proxies of neighborhood connectivity explained spatial coupling, direct measures of city-wide connectivity (e.g., space use by humans [13]) seem important for maximizing the preventative utility of mosquito surveillance data. Once these data are available, they can be used to identify which areas with high densities of mosquitos are most critical for targeting vector control in order to minimize transmission of dengue viruses among humans. 
A complimentary approach is to develop a spatially-explicit disease dynamic model that could be used to estimate city-wide connectivity, identify transmission hotspots and identify strategies of vector control that minimize city-wide cases. These are the goals of our ongoing research. Future research should also include city-wide MI-virus data as they become available. Ideally, case data should be collected at the block level, the operational unit, and serotype-specific case data are important for a better understanding of how to employ mosquito density data for spatial prioritization of vector control. Morrison AC, Zielinski-Gutierrez E, Scott TW, Rosenberg R. Defining challenges and proposing solutions for control of the virus vector Aedes aegypti. PLoS Med. 2008;5(3):e68. Huy R, Buchy P, Conan A, Ngan C, Ong S, Ali R, et al. National dengue surveillance in Cambodia 1980–2008: epidemiological and virological trends and the impact of vector control. B World Health Organ. 2010;88(9):650–7. Honorio NA, Nogueira RMR, Codeco CT, Carvalho MS, Cruz OG, Magalhaes M, et al. Spatial evaluation and modeling of dengue seroprevalence and vector density in Rio de Janeiro, Brazil. Plos Neglect Trop Dis. 2009;3(11):e545. Lin CH, Wen TH. Using Geographically Weighted Regression (GWR) to explore spatial varying relationships of immature mosquitoes and human densities with the incidence of dengue. Int J Environ Res Public Health. 2011;8(7):2798–815. Scott TW, Morrison AC. Vector dynamics and transmission of dengue virus: implications for dengue surveillance and prevention strategies vector dynamics and dengue prevention. In: Rothman AL, editor. Dengue virus. 3382010th ed. 2010. p. 115–28. Thammapalo S, Nagao Y, Sakamoto W, Saengtharatip S, Tsujitani M, Nakamura Y, et al. Relationship between transmission intensity and incidence of dengue hemorrhagic fever in Thailand. Plos Neglect Trop Dis. 2008;2(7):e263. Mammen MP, Pimgate C, Koenraadt CJ, Rothman AL, Aldstadt J, Nisalak A, et al. Spatial and temporal clustering of dengue virus transmission in Thai villages. PLoS Med. 2008;5(11):e205. Yoon IK, Getis A, Aldstadt J, Rothman AL, Tannitisupawong D, Koenraadt CJ, et al. Fine scale spatiotemporal clustering of dengue virus transmission in children and Aedes aegypti in rural Thai villages. PLoS Negl Trop Dis. 2012;6(7):e1730. Stoddard ST, Forshey BM, Morrison AC, Paz-Soldan VA, Vazquez-Prokopec GM, Astete H, et al. House-to-house human movement drives dengue virus transmission. Proc Natl Acad Sci U S A. 2013;110(3):994–9. Stoddard ST, Morrison AC, Vazquez-Prokopec GM, Paz Soldan V, Kochel TJ, Kitron U, et al. The role of human movement in the transmission of vector-borne pathogens. PLoS Negl Trop Dis. 2009;3(7):e481. Vazquez-Prokopec GM, Kitron U, Montgomery B, Horne P, Ritchie SA. Quantifying the spatial dimension of dengue virus epidemic spread within a tropical urban environment. PLoS Negl Trop Dis. 2010;4(12):e920. Danon L, House T, Keeling MJ. The role of routine versus random movements on the spread of disease in Great Britain. Epidemics. 2009;1(4):250–8. Vazquez-Prokopec GM, Bisanzio D, Stoddard ST, Paz-Soldan V, Morrison AC, Elder JP, et al. Using GPS technology to quantify human mobility, dynamic contacts and infectious disease dynamics in a resource-poor urban environment. PLoS One. 2013;8(4):e58802. De Melo DPO, Scherrer LR, Eiras AE. Dengue fever occurrence and vector detection by larval survey, ovitrap and MosquiTRAP: a space-time clusters analysis. PLoS One. 2012;7(7):e42125. Eiras AE, Resende MC. 
Preliminary evaluation of the "Dengue-MI" technology for Aedes aegypti monitoring and control. Cad Saude Publica. 2009;25:S45–58. Pepin KM, Marques-Toledo C, Scherer L, Morais MM, Ellis B, Eiras AE. Cost-effectiveness of novel system of mosquito surveillance and control, Brazil. Emerg Infect Dis. 2013;19(4):542–50. de Resende MC, de Azara TMF, Costa IO, Heringer LC, de Andrade MR, Acebal JL, et al. Field optimisation of MosquiTRAP sampling for monitoring Aedes aegypti Linnaeus (Diptera: Culicidae). Mem I Oswaldo Cruz. 2012;107(3):294–302. Honorio NA, Codeco CT, Alvis FC, Magalhaes M, Lourenco-De-Oliveira R. Temporal distribution of Aedes aegypti in different districts of Rio De Janeiro, Brazil, measured by two types of traps. J Med Entomol. 2009;46(5):1001–14. Lourenco-de-Oliveira R, Lima JBP, Peres R, Alves FD, Eiras AE, Codeco CT. Comparison of different uses of adult traps and ovitraps for assessing dengue vector infestation in endemic areas. J Am Mosquito Contr. 2008;24(3):387–92. Maciel-de-Freitas R, Peres RC, Alves F, Brandolini MB. Mosquito traps designed to capture Aedes aegypti (Diptera: Culicidae) females: preliminary comparison of Adultrap, MosquiTRAP and backpack aspirator efficiency in a dengue-endemic area of Brazil. Mem I Oswaldo Cruz. 2008;103(6):602–5. Favaro EA, Mondini A, Dibo MR, Barbosa AAC, Eiras AE, Neto FC. Assessment of entomological indicators of Aedes aegypti (L.) from adult and egg collections in Sao Paulo, Brazil. J Vector Ecol. 2008;33(1):8–16. Rue H, Martino S, Chopkin N. Approximate bayesian inference for latent gaussian models using integrated nested laplace approximations. J Roy Stat Soc B. 2009;71(2):319–92. Xia Y, Bjornstad ON, Grenfell BT. Measles metapopulation dynamics: a gravity model for epidemiological coupling and dynamics. Am Nat. 2004;164(2):267–81. Ferrari MJ, Grais RF, Bharti N, Conlan AJ, Bjornstad ON, Wolfson LJ, et al. The dynamics of measles in sub-Saharan Africa. Nature. 2008;451(7179):679–84. Spiegelhalter DJ, Best NG, Carlin BR, van der Linde A. Bayesian measures of model complexity and fit. J Roy Stat Soc B. 2002;64:583–616. Held L, Schrodle B, Rue H. Posterior and cross-validatory predictive checks: a comparison of MCMC and INLA. In: Kneib T, Tutz G, editors. Statistical modelling and regression structures. Berlin, Germany: Springer,verlag; 2010. Chan M, Johansson MA. The incubation periods of dengue viruses. PLoS One. 2012;7(11):e50972. Sharma KD, Mahabir RS, Curtin KM, Sutherland JM, Agard JB, Chadee DD. Exploratory space-time analysis of dengue incidence in Trinidad: a retrospective study using travel hubs as dispersal points, 1998–2004. Parasite Vector. 2014;7:341. David MR, Lourenco-de-Oliveira R, Maciel de Freitas R. Container productivity, daily survival rates and dispersal of Aedes aegypti mosquitoes in a high income dengue epidemic neighborhood of Rio de Janeiro: presumed influence of differential urban structure on mosquito biology. Mem I Oswaldo Cruz. 2009;104(6):927–32. Gonçalves CM, Melo FF, Bezerra JMT, Chaves BA, Silva BM, Silva LD, et al. Distinct variation in vector competence among nine field populations of Aedes aegypti from a Brazilian dengue-endemic risk city. Parasite Vector. 2014;7:320. Luo L, Liang HY, Hu YS, Liu WJ, Wang YL, Jing QL, et al. Epidemiological, virological, and entomological characteristics of dengue from 1978 to 2009 in Guangzhou, China. J Vector Ecol. 2012;37(1):230–40. Liebman KA, Stoddard ST, Morrison AC, Rocha C, Minnick S, Sihuincha M, et al. 
Spatial dimensions of dengue virus transmission across interepidemic and epidemic periods in Iquitos, Peru (1999–2003). Plos Neglect Trop Dis. 2012;6(2):e1472. Kuno G. Review of the factors modulating dengue transmission. Epidemiol Rev. 1995;17(2):321–35. Ministerio da saude SdVeS. Diretrizes Nacionais para a Prevenção e Controle de Epidemias de Dengue. Edited by épidemiológica DdV. http://www.dengue.pr.gov.br. Adams B, Kapan DD. Man bites mosquito: understanding the contribution of human movement to vector-borne disease dynamics. PLoS One. 2009;4(8):e6763. Rotela C, Fouque F, Lamfri M, Sabatier P, Introini V, Zaidenberg M, et al. Space-time analysis of the dengue spreading dynamics in the 2004 Tartagal outbreak, Northern Argentina. Acta Trop. 2007;103(1):1–13. Cosner C, Beier JC, Cantrell RS, Impoinvil D, Kapitanski L, Potts MD, et al. The effects of human movement on the persistence of vector-borne diseases. J Theor Biol. 2009;258(4):550–60. Thanks to the dengue control program managers in the cities of Vitoria, GV and SL, especially to André Capezzuto, José Batista and Maria Jose Lanza, respectively, for help in obtaining the data for analyses. KMP, ADL, DTSH and CTW were funded by the RAPIDD program of the Science and Technology Directorate, U.S. Department of Homeland Security, and the Fogarty International Center, NIH. KMP was also funded by USDA-APHIS-WS during the latter part of the study. AEE, CMT, KHL and KSP were funded by CNPq (Pronex-Dengue – grants # 550131/2010-8 and Doenças Neglenciadas # 404211/2012-7) and ICNT-Dengue. DTSH acknowledges the David H Smith postdoctoral fellowship for funding. SC acknowledges funding from the University of Tasmania Research Enhancement Grant Scheme (grant # C 20897). CBL was supported by the NSF Graduate Research Fellowship Program under grant DGE-1321845. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of USDA-APHIS-VS, USDA-APHIS-WS, DHS or NIH. Fogarty International Center, National Institute of Health, Bethesda, Maryland, 20892, USA Kim M Pepin , Angela D Luis & Colleen T Webb United States Department of Agriculture, National Wildlife Research Center, Wildlife Services, Animal and Plant Health Inspection Service, 4101 Laporte Ave, Fort Collins, CO, 80521, USA Department of Biology, Colorado State University, Fort Collins, Colorado, 80523, USA , Clint B Leach , David TS Hayman , Nels G Johnson , Michael G Buhnerkempe , Scott Carver , Daniel A Grear , Kimberly Tsao Ecovec S.A, Belo Horizonte, Minas Gerais, Brazil Cecilia Marques-Toledo Departamento de Parasitologia, Universidade Federal de Minas Gerais, Av. Pres. 
Antonio Carlos, 6627, Pampulha, Belo Horizonte, MG, Brazil Karla H Laass , Kelly S Paixao & Alvaro E Eiras Department of Biology, University of Florida, Gainesville, Florida, 32611, USA David TS Hayman School of Biological Sciences, University of Tasmania, Hobart, 7000, Australia Scott Carver Current address: Department of Wildlife Biology, College of Forestry and Conservation, University of Montana, Missoula, Montana, 59812, USA Angela D Luis Current address: Department of Ecology and Evolutionary Biology, University of California – Los Angeles, Los Angeles, California, 90095, USA Michael G Buhnerkempe Current address: EpiLab, Infectious Disease research Centre (IDReC), Hopkirk Research Institute, Institute of Veterinary, Animal and Biomedical Sciences, Massey University, Palmerston North, Manawatu, New Zealand Search for Kim M Pepin in: Search for Clint B Leach in: Search for Cecilia Marques-Toledo in: Search for Karla H Laass in: Search for Kelly S Paixao in: Search for Angela D Luis in: Search for David TS Hayman in: Search for Nels G Johnson in: Search for Michael G Buhnerkempe in: Search for Scott Carver in: Search for Daniel A Grear in: Search for Kimberly Tsao in: Search for Alvaro E Eiras in: Search for Colleen T Webb in: Corresponding authors Correspondence to Kim M Pepin or Alvaro E Eiras. Conception and design: KP, CL, AL, DH, MB, SC, DG, KT, AE, CW; Analyses: KP, CL, AL, DH, NJ, MB, SC, DG; Data collection and sampling design: CM, KL, KP, AE; Wrote the manuscript: KP; Edited the manuscript: KP, CL, CM, AL, DH, MB, SC, KT, AE, CW; Read and approved the manuscript: all. The "Traps" spreadsheet show the city-wide mosquito counts and number of traps monitored for each week in each city. The "Neighborhoods" spread sheet gives the population size and total area (km2) for each neighborhood in each city. Additional file 2: Figure S1. Model selection on scaling parameters for Vitoria. The covariate data describing between-neighborhood effects (Mjα, [Yjf(x)j]α and [ MjYjf(x)j]α) were scaled because these terms were much larger than those describing the within-neighborhood effects (Mi, Ii and Mi Ii). An initial attempt to fit the scaling parameters yielded lack of convergence, thus we conducted model selection on a range of pre-selected parameter values (α = 0.001, 0.01, 0.1, 0.5, 1, 2; indicated in the legend). Only single variable models were investigated (Model structure: log(yi,t) = Xj,tα + πi + log(Pi), where X is defined on the X-axis). (A) DIC (B) Spearman's correlation between observed and model-predicted data. Only results from the best lags are presented for each scaling factor (selected from the analysis shown in Additional file 3: Figure S2). M = mosquito density, Y = reported cases, d = distance, E = economic value, D = density, i = focal neighborhood, j = all other neighborhoods (i≠j). Model selection on covariate lags for Vitoria. Preliminary analyses showed that models with lower DIC's were obtained when weekly data were averaged over 3-week windows. Thus, covariate data for all analyses were 3-week averages from the week indicated on the X-axis to two weeks in the future (i.e., 1 indicates an average of weeks 1–3). Left-hand plots indicate DIC for each single-variable model (structure: log(yi,t) = X + πi + log(Pi); where X represents the covariate in the figure legend), at each lag indicated on the X-axis. Right-hand plots display Spearman's r for the same set of models. 
Only results from the best scaling factors (selected from the analysis shown in Additional file 2: Figure S1) are shown. (A) Mosquito-only covariates. (B) Case-only covariates. (C) Covariates with an interaction between mosquito density and cases. Black indicates within-neighborhood effects, red is nearest-neighbor between-neighborhood effects, blue is global between-neighborhood effects. Weighting terms are thin lines that are almost completely overlapping, showing that there was not much difference in the type of approximation used for weighting global connectivity. Model selection on scaling parameters for GV. The covariate data describing between-neighborhood effects (Mjα, [Yjf(x)j]α and [ MjYjf(x)j]α) were scaled because these terms were much larger than those describing the within-neighborhood effects (Mi, Ii and Mi Ii). An initial attempt to fit the scaling parameters yielded lack of convergence, thus we conducted model selection on a range of pre-selected parameter values (α = 0.001, 0.01, 0.1, 0.5, 1, 2; indicated in the legend). Only single variable models were investigated (Model structure: log(yi,t) = Xj,tα + πi + log(Pi), where X is defined on the X-axis). (A) DIC (B) Spearman's correlation between observed and model-predicted data. Only results from the best lags are presented for each scaling factor (selected from the analysis shown in Additional file 5: Figure S4). M = mosquito density, Y = reported cases, d = distance, C = number of commercial buildings, R = number of residences, D = density, i = focal neighborhood, j = all other neighborhoods (i≠j). Model selection on covariate lags for GV. Preliminary analyses showed that models with lower DIC's were obtained when weekly data were averaged over 3-week windows. Thus, covariate data for all analyses were 3-week averages from the week indicated on the X-axis to two weeks in the future (i.e., 1 indicates an average of weeks 1–3). Left-hand plots indicate DIC for each single-variable model (structure: log(yi,t) = X + πi + log(Pi); where X represents the covariate in the figure legend), at each lag indicated on the X-axis. Right-hand plots display Spearman's r for the same set of models. Only results from the best scaling factors (selected from the analysis shown in Additional file 4: Figure S3) are shown. (A) Mosquito-only covariates. (B) Case-only covariates. (C) Covariates with an interaction between mosquito density and cases. Black indicates within-neighborhood effects, blue is global between-neighborhood effects. Weighting terms are thin lines that are almost completely overlapping in some cases, showing that there was not much difference in the type of approximation used for weighting global connectivity. Model selection on scaling parameters for SL. The covariate data describing between-neighborhood effects (Mjα, [Yjf(x)j]α and [ MjYjf(x)j]α) were scaled because these terms were much larger than those describing the within-neighborhood effects (Mi, Ii and Mi Ii). An initial attempt to fit the scaling parameters yielded lack of convergence, thus we conducted model selection on a range of pre-selected parameter values (α = 0.001, 0.01, 0.1, 0.5, 1, 2; indicated in the legend). Only single variable models were investigated (Model structure: log(yi,t) = Xj,tα + πi + log(Pi), where X is defined on the X-axis). (A) DIC (B) Spearman's correlation between observed and model-predicted data. Only results from the best lags are presented for each scaling factor (selected from the analysis shown in Additional file 7: Figure S6). 
M = mosquito density, Y = reported cases, d = distance, C = number of commercial buildings, R = number of residences, D = density, i = focal neighborhood, j = all other neighborhoods (i≠j). Model selection on covariate lags for SL. Preliminary analyses showed that models with lower DIC's were obtained when weekly data were averaged over 3-week windows. Thus, covariate data for all analyses were 3-week averages from the week indicated on the X-axis to two weeks in the future (i.e., 1 indicates an average of weeks 1–3). Left-hand plots indicate DIC for each single-variable model (structure: log(yi,t) = X + πi + log(Pi); where X represents the covariate in the figure legend), at each lag indicated on the X-axis. Right-hand plots display Spearman's r for the same set of models. Only results from the best scaling factors (selected from the analysis shown in Additional file 6: Figure S5) are shown. (A) Mosquito-only covariates. (B) Case-only covariates. (C) Covariates with an interaction between mosquito density and cases. Black indicates within-neighborhood effects, blue is global between-neighborhood effects. Weighting terms are thin lines that are almost completely overlapping, showing that there was not much difference in the type of approximation used for weighting global connectivity. Model selection on covariate lags for data aggregated to the city-wide scale. X-axes show the 3-week lag windows. Only a single-variable model with the mosquito density data was fit for each lag. Model structure: log(yt) = Mt; note that there are no neighborhood random effects or offset in this model because data from each time step are the total cases and mosquito density for the entire city. (A) DIC. (B) R2 (as in simple linear regression). Predicted cases from the city-level mosquito density models. City-wide weekly cases are predicted from city-wide mosquito density data using a generalized linear model assuming a Poisson error structure and a log link. Parameter estimation was by INLA (same method used in the neighborhood-level models). Model selection was conducted on lags of mosquito density between 1 and 20 weeks prior to case reports. Mosquito density data were smoothed as three-week averages using a 1-week sliding window. (A) Lag of 1 to 3 weeks. (B) Lag of 13 to 15 weeks (shown to be the best by DIC and explained variation). Additional file 10: Figure S9. Performance of best models for GV (A) and SL (B). Spearman's r is indicated for the city-level and neighborhood-level fits for both predictions from the fitted model and forecasts from the model (i.e., of data that were not used in model selection). Best models were: (A) Yi,t = β1Mi,t-2 + β2Yi,t-1 + β3Mi,t-4Yi,t-4 + β4∑jMj,t-12 + β5∑j(Yj,t-1Dj) 0.1 + β6∑j(Mj,t-1Yj,t-1Dj) 0.5 + log(Pi) + πi and (B) Yi,t = β1Mi,t-6 + β2Yi,t-1 + β3Mi,t-1Yi,t-1 + β4∑jMj,t-60.1 + β5∑j(Yj,t-1Rj) 0.001 + β6∑j(Mj,t-1 Yj,t-1Rj) 0.1 + log(Pi) + πi. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Pepin, K.M., Leach, C.B., Marques-Toledo, C. et al. 
Utility of mosquito surveillance data for spatial prioritization of vector control against dengue viruses in three Brazilian cities. Parasites Vectors 8, 98 (2015). doi:10.1186/s13071-015-0659-y

Keywords: Vector density, Mosquito-human interactions, Gravity model
Marine Ecosystem Response to Nutrient Input Reduction in Jinhae Bay, South Korea Oh, Hyun-Taik;Lee, Won-Chan;Koo, Jun-Ho;Park, Sung-Eun;Hong, Sok-Jin;Jung, Rae-Hong;Park, Jong-Soo 819 We study on the dynamic interaction with a simulated physical-biological coupled model response to nutrient reduction scenario in Jinhae Bay. According to the low relative errors, high regression coefficients of COD and DIN, and realistic distribution in comparison to the observation, our coupled model could be applicable for assessing the marine ecosystem response to nutrient input reduction in Jinhae Bay. Due to the new construction and expansion of sewage treatment plant from our government, we reduce 50% nutrient inputs near Masan Bay and sewage treatment plant. COD achieves Level II in Korea standard of the water quality from the middle of the Masan Bay to all around Jinhae Bay except the inner Masan Bay remaining at Level III. When our experiment reduces 50% nutrient inputs near Masan Bay and Dukdong sewage treatment plant simultaneously, COD decreases to about 0.1-1.2 mg/L $(128^{\circ}30'{\sim}128^{\circ}40'\;E,\;35^{\circ}05'{\sim}35^{\circ}11'\;N)$. The COD from the middle of the Masan Bay to Jinhae Bay achieves Level II. Pilot Scale Assessment of DOC and THMs Removal in Conventional Water Treatment System Lee, Choong-Dae;Lee, Yoon-Jin 829 This research aims to investigate the behavior of organic matter that causes bacterial re-growth and the formation of disinfectant by-products such as THM in water treatment, and to optimize conditions for a more efficient and conventional water facility. THM removed 51 % and 12 % through coagulation/sedimentation and filtration using a selected conventional system. In this experiment, the removal ratio of DOC was highest at 68 % when the Gt value was 42,000 and lowest at 41 % when the Gt value was 30,000. 77-84 % of total DOC was removed during coagulation/sedimentation, and 15-23 % was removed during filtration. When Gt values were between 30,000 and 66,000, over 50 % of high molecular matter above 10 K during coagulation/sedimentation was removed. Turbidity removed 98 % when the G1 value was 66,000. As the Gt value increased, the turbidity removal ratio increased. Turbidity removed over 20 % during the filtration process. Analysis of Local Wind in Busan Metropolitan area According to Wind Sector Division - Part I : Coarse Division of Wind Sector using Meteorological Observation Data - Lee, Hwa-Woon;Jung, Woo-Sik;Leem, Heon-Ho;Lee, Kwi-Ok;Choi, Hyun-Jung;Ji, Hyo-Eun;Lee, Hyun-Ju;Sung, Kyoung-Hee;Do, Woo-Gon 835 In this study, climate analysis and wind sector division were conducted for a propriety assessment to determine the location of air quality monitoring sites in the Busan metropolitan area. The results based on the meteorological data$(2000{\sim}2004)$ indicated hat air temperature is strongly correlated between 9 atmospheric monitoring sites, while wind speed and direction are not. This is because wind is strongly affected by the surrounding terrain and the obstacles such as building and tree. in the next stage, we performed cluster analysis to divide wind sector over the Busan metropolitan area. The cluster analysis showed that the Busan metropolitan area is divided into 6 wind sectors. However 1 downtown and 2 suburbs an area covering significantly broad region in Busan are not divided into independent sectors, because of the absence of atmospheric monitoring site. As such, the Busan metropolitan area is finally divided into 9 sectors. 
Formation and Chemical Characteristics of Dewfall in 2005 at Busan Jeon, Byung-Il;Hwang, Yong-Sik;Park, Gwang-Soon 847 In order to understand chemical characteristics and formation of dewfall in Busan, we analysed monthly distribution of dewfall, and investigated its chemical composition of dewfall. This study used the modified teflon plate $(1m{\times}1m)$ at Jangyongsil science high school from June 2005 to October 2005. In order to estimate qualitatively water soluble components, IC, ICP and UV methods for water soluble ions are also used respectively. Dewfall amount of sampling periods (26 day) collected 1.29 mm. Distribution of water soluble ions in dewfall founded the highest concentration $(81.3{\mu}eq/{\ell}\;for\;NO_3^-,\;146.6{\mu}eq/{\ell}\;for\;SO_4^{2-},\;and\;114.3{\mu}eq/{\ell}\;for\;nss-SO_4^{2-})$ during the June. pH was the lowest by 5.12 June, and October (pH 6.68) by most high and average pH was 5.46. Monthly equivalent ratio of $[SO_4^{2-}]/[NO_3^-]$ showed the highest value (2.94) during the September, the lowest value (1.77) during the July, and the mean value was 3.45. Development for UV/TiO2 Photocatalytic Oxidation Indoor Air Compound Process Jeon, Bo-Kyung;Choi, Kum-Chan;Suh, Jeong-Min 855 This study introduces a method to eliminate formaldehyde and benzene, toluene from indoor air by means of a photocatalytic oxidation reaction. In the method introduced, for the good performance of the reaction, the effect and interactions of the $TiO_2$ catalyst and ultraviolet in photocatalytic degradation on the reaction area, dosages of catalysts, humidity and light should be precisely examined and controled. Experiments has been carried out under various intensities of UV light and initial concentrations of formaldehyde, benzene and toluene to investigate the removal efficiency of the pollutants. Reactors in the experiments consist of an annular type Pyrex glass flow reactor and an 11W germicidal lamp. Results of the experiments showed reduction of formaldehyde, benzene and toluene in ultraviolet $/TiO_2/$ activated carbon processes (photooxidation-photocatalytic oxidation-adsorption processes), from 98% to 90%, from 98% to 93% and from 99% to 97% respectively. Form the results we can get a conclusion that a ultraviolet/Tio2/activated carbon system used in the method introduced is a powerful one for th treatment of formaldehyde, benzene and toluene of indoor spaces. Application of Discrete Wavelet Transform for Detection of Long- and Short-Term Components in Real-Time TOC Data Jin, Young-Hoon;Park, Sung-Chun 865 Recently, Total Organic Carbon (TOC) which can be measured instantly can be used as an organic pollutant index instead of BOD or COD due to the diversity of pollutants and non-degradable problem. The primary purpose of the present study is to reveal the properties of time series data for TOC which have been measured by real-time monitoring in Juam Lake and, in particularly, to understand the long- and short-term characteristics with the extraction of the respective components based on the different return periods. For the purpose, we proposed Discrete Wavelet Transform (DWT) as the methodology. The results from the DWT showed that the different components according to the respective periodicities could be extracted from the time series data for TOC and the variation of each component with respect to time could emerge from the return periods and the respective energy ratios of the decomposed components against the raw data. 
Management of Water Pumping System in Coastal Area of Jeju City Based on Coastal Landscape Cho, Eun-Il;Lee, Byung-Gul 871 Management of the water pumping system in the coastal region has become an important problem in Jeju city, because the exposed pipelines of the pumping system spoil the coastal landscape. To address the problem, we surveyed the pipelines on the surface along the coast from Tapdong to Doduhang. The survey showed that the Tapdong and Dodu areas were not unsightly, because all pipelines there are located underground; the other areas, such as Yongdam, Handugi and the Yongdam fishing village, however, present a serious problem for the coastal landscape. We then estimated the coastal landscape colour characteristics of Jeju city based on the pipeline survey. The resulting colour panel shows that green, blue and grey are the dominant colours of the Jeju coastal region. Based on this colour panel, we propose two measures: a short-term treatment and a long-term one. The short-term measure is colour treatment, repainting the pipelines to match the surrounding natural colours. The long-term measure is a construction planning and design approach. Although the latter method is very useful for Jeju island, it requires considerable time and money; in the present situation the short-term measure is therefore preferable. Numerical Analysis on the Beach Erosion Prevention Capability of Submerged Breakwaters Kim, In-Chul;Yoon, Jong-Sung 881 The purpose of this research is to examine the beach erosion prevention capability of submerged breakwaters under storm wave conditions. To accomplish this objective, the computational domain was divided into two domains: a large domain and a detailed domain for Song-Do beach. For each computational domain, numerical models for wave transformation, wave-induced currents and beach erosion were applied to experimental cases including 1) the present beach condition and 2) the condition in which submerged breakwaters are installed about 240 m from the shoreline of a beach enlarged by artificial nourishment. The results of this research show that if storm waves attack the present beach, erosion occurs widely over the whole beach. However, when the submerged breakwaters are installed in addition to the artificial nourishment, storm waves are adequately controlled and strong wave-induced currents occur only around the submerged breakwaters, so that beach evolution appears locally only at the western end of the beach. Study of Design Flood Estimation by Watershed Characteristics Park, Ki-Bum 887 In this study of the relation between frequency flood discharges and basin characteristics, a linear regression model was developed. The basin characteristic most strongly correlated with the frequency flood discharges in the Nakdong river basin is the basin area, followed by the mean basin width and the river length. The following results were obtained from the multiple-correlation analysis between the flood discharges and the basin characteristics collected from the established river maintenance master plans of the one hundred twenty-five rivers in the Nakdong river basin.
The multivariate correlation analysis between the flood discharges and the most basic variables used in determining flood discharge - basin area, river length, basin slope, river slope, mean basin width, shape factor and probability precipitation - gave a multiple correlation coefficient above 0.9 and a determination coefficient above 0.85. The regression model derived from this multiple-correlation analysis of basin characteristics is therefore a useful reference for estimating the design flood discharge of ungauged basins. Co-digestion of Thermophilic Acid-fermented Food Wastes and Sewage Sludge Ahn, Chul-Woo;Jang, Seong-Ho;Park, Jin-Sik 897 This study investigated the biodegradation characteristics and the optimum mixing ratio for co-digestion of thermophilic acid-fermented food waste and sewage sludge in a batch anaerobic digester. As the basic operating conditions for anaerobic digestion, the reaction temperature was controlled at $35{\pm}1^{\circ}C$ and the stirrer was set to 70 rpm. Thermophilic acid-fermented food waste and sewage sludge were mixed at ratios of 10:0, 7:3, 5:5, 3:7, 0:10 and 5:5 (food waste : sewage sludge) as the influent substrates. In the batch mesophilic anaerobic digestion reactor, the methane production rate of $385mL\;CH_4/g\;VS_{added}$ at the 1:1 mixing ratio was higher than that at any other mixing ratio. Compared with the methane production rate of $293mL\;CH_4/g\;VS_{added}$ at the 1:1 mixing ratio of food waste and sewage sludge, pretreatment of the food waste by thermophilic acid fermentation was more effective in co-digestion with sewage sludge. Questioning Styles in the Middle School Environmental Textbooks Huh, Man-Kyu;Huh, Hong-Wook;Moon, Do-Hoo;Moon, Sung-Gi 907 This study analyses the questioning styles in three middle school environmental textbooks in terms of the frequency, type and placement of questions, and compares the kinds of scientific processes elicited by the questions in the topics of each textbook. The instrument was the Textbook Questioning Strategies Assessment Instrument (TQSAI), developed by the Cooperative Teacher Preparation Program, University of California. The mean number of questions per topic was 4.0 and the ratio of questions to sentences was 3.8%. Empirical and non-empirical questions accounted for 52.5% and 47.5% of the questions in textbook D, 56.6% and 43.4% in textbook J, and 92.7% and 7.3% in textbook K, respectively. The open-hearted question was the most frequent of all question types in the three textbooks, and the explanatory question was the most frequent across all question characteristics. The various question types were distributed throughout the textbooks, including green-field, debate-discussion, examination and other formats.
Acoustic impedance and hydrodynamic instability of the flow through a circular aperture in a thick plate D. Fabre, R. Longobardi, V. Citro, P. Luchini Journal: Journal of Fluid Mechanics / Volume 885 / 25 February 2020 Published online by Cambridge University Press: 18 December 2019, A11 Print publication: 25 February 2020 We study the unsteady flow of a viscous fluid passing through a circular aperture in a plate characterized by a non-zero thickness. We investigate this problem by solving the incompressible linearized Navier–Stokes equations around a laminar base flow, in both the forced case (allowing us to characterize the coupling of the flow with acoustic resonators) and the autonomous regime (allowing us to identify the possibility of purely hydrodynamic instabilities). In the forced case, we calculate the impedances and discuss the stability properties in terms of a Nyquist diagram. We show that such diagrams allow us to predict two kinds of instabilities: (i) a conditional instability linked to the over-reflexion of an acoustic wave but requiring the existence of a conveniently tuned external acoustic resonator, and (ii) a purely hydrodynamic instability existing even in a strictly incompressible framework. A parametric study is conducted to predict the range of existence of both instabilities in terms of the Reynolds number and the aspect ratio of the aperture. Analysing the structure of the linearly forced flow allows us to show that the instability mechanism is closely linked to the existence of a recirculation region within the thickness of the plate. We then investigate the autonomous regime using the classical eigenmode analysis. The analysis confirms the existence of the purely hydrodynamic instability in accordance with the impedance-based criterion. The spatial structure of the unstable eigenmodes is found to be similar to the structure of the corresponding unsteady flows computed using the forced problem. Analysis of the adjoint eigenmodes and of the adjoint-based structural sensitivity confirms that the origin of the instability lies in the recirculation region existing within the thickness of the plate. Infection surveillance and prevention strategies to detect and prevent postaccess breast tissue expander infections Sima L. Sharara, Heather M. Saunders, Valeria Fabre, Sara E. Cosgrove, Donna P. Fellerman, Clare Rock, Polly A. Trexler, Laura B. Lewis, Meg G. Bernstein, Michele A. Manahan, Justin M. Sacks, Gedge D. Rosson, Lisa L. Maragakis Journal: Infection Control & Hospital Epidemiology / Volume 40 / Issue 11 / November 2019 No standardized surveillance criteria exist for surgical site infection after breast tissue expander (BTE) access. This report provides a framework for defining postaccess BTE infections and identifies contributing factors to infection during the expansion period. Implementing infection prevention guidelines for BTE access may reduce postaccess BTE infections. Integrating bedside nurses into antibiotic stewardship: A practical approach Elizabeth A. Monsees, Pranita D. Tamma, Sara E. Cosgrove, Melissa A. Miller, Valeria Fabre Journal: Infection Control & Hospital Epidemiology / Volume 40 / Issue 5 / May 2019 Published online by Cambridge University Press: 21 February 2019, pp.
579-584 Nurses view patient safety as an essential component of their work and have reported a general interest in embracing an antibiotic steward role. However, antibiotic stewardship (AS) functions have not been formally integrated into nursing practice despite nurses' daily involvement in clinical activities that impact antibiotic decisions (e.g., obtaining specimens for cultures, blood drawing for therapeutic drug monitoring). Recommendations to expand AS programs to include bedside nurses are generating support at a national level, yet a practical guidance on how nurses can be involved in AS activities is lacking. In this review, we provide a framework identifying selected practices where nurses can improve antibiotic prescribing practices through appropriate obtainment of Clostridioides difficile tests, appropriate urine culturing practices, optimal antibiotic administration, accurate and detailed documentation of penicillin allergy histories and through the prompting of antibiotic time outs. We identify reported barriers to engagement of nurses in AS and offer potential solutions that include patient safety principles and quality improvement strategies that can be used to mitigate participation barriers. This review will assist AS leaders interested in advancing the contributions of nurses into their AS programs by discussing education, communication, improvement models, and workflow integration enhancements that strengthen systems to support nurses as valued partners in AS efforts. On the instabilities of a potential vortex with a free surface J. Mougel, D. Fabre, L. Lacaze, T. Bohr Journal: Journal of Fluid Mechanics / Volume 824 / 10 August 2017 Published online by Cambridge University Press: 05 July 2017, pp. 230-264 In this paper, we address the linear stability analysis of a confined potential vortex with a free surface. This particular flow has been recently used by Tophøj et al. (Phys. Rev. Lett., vol. 110(19), 2013, article 194502) as a model for the swirling flow of fluid in an open cylindrical container, driven by rotating the bottom plate (the rotating bottom experiment) to explain the so-called rotating polygons instability (Vatistas J. Fluid Mech., vol. 217, 1990, pp. 241–248; Jansson et al., Phys. Rev. Lett., vol. 96, 2006, article 174502) in terms of surface wave interactions leading to resonance. Global linear stability results are complemented by a Wentzel–Kramers–Brillouin–Jeffreys (WKBJ) analysis in the shallow-water limit as well as new experimental observations. It is found that global stability results predict additional resonances that cannot be captured by the simple wave coupling model presented in Tophøj et al. (2013). Both the main resonances (thought to be at the root of the rotating polygons) and these secondary resonances are interpreted in terms of over-reflection phenomena by the WKBJ analysis. Finally, we provide experimental evidence for a secondary resonance supporting the numerical and theoretical analysis presented. These different methods and observations allow to support the unstable wave coupling mechanism as the physical process at the origin of the polygonal patterns observed in free-surface rotating flows. Acoustic streaming and the induced forces between two spheres D. Fabre, J. Jalal, J. S. Leontini, R. Manasseh Journal: Journal of Fluid Mechanics / Volume 810 / 10 January 2017 Published online by Cambridge University Press: 25 November 2016, pp. 
378-391 Print publication: 10 January 2017 The ability of acoustic microstreaming to cause a pair of particles to attract or repel is investigated. Expanding the flow around two spheres in terms of a small-amplitude parameter measuring the amplitude of the forcing, the leading order is an oscillating flow field with zero mean representing the effect of the applied acoustic field, while the second-order correction contains a steady streaming component. A modal decomposition in the azimuthal direction reduces the problem to a few linear problems in a two-dimensional domain corresponding to the meridional $(r,z)$ plane. The analysis computes both the intricate flow fields and the mean forces felt by both spheres. If the spheres are aligned obliquely with respect to the oscillating flow, they experience a lateral force which realigns them into a transverse configuration. In this transverse configuration, they experience an axial force which can be either attractive or repulsive. At high frequencies the force is always attractive. At low frequencies, it is repulsive. At intermediate frequencies, the force is attractive at large distances and repulsive at small distances, leading to the existence of a stable equilibrium configuration. Linear stability and weakly nonlinear analysis of the flow past rotating spheres V. Citro, J. Tchoufag, D. Fabre, F. Giannetti, P. Luchini Journal: Journal of Fluid Mechanics / Volume 807 / 25 November 2016 Print publication: 25 November 2016 We study the flow past a sphere rotating in the transverse direction with respect to the incoming uniform flow, and particularly consider the stability features of the wake as a function of the Reynolds number $Re$ and the sphere dimensionless rotation rate $\Omega$. Direct numerical simulations and three-dimensional global stability analyses are performed in the ranges $150\leqslant Re\leqslant 300$ and $0\leqslant \Omega\leqslant 1.2$. We first describe the base flow, computed as the steady solution of the Navier–Stokes equation, with special attention to the structure of the recirculating region and to the lift force exerted on the sphere. The stability analysis of this base flow shows the existence of two different unstable modes, which occur in different regions of the $Re/\Omega$ parameter plane. Mode I, which exists for weak rotations ($\Omega<0.4$), is similar to the unsteady mode existing for a non-rotating sphere. Mode II, which exists for larger rotations ($\Omega>0.7$), is characterized by a larger frequency. Both modes preserve the planar symmetry of the base flow. We detail the structure of these eigenmodes, as well as their structural sensitivity, using adjoint methods. Considering small rotations, we then compare the numerical results with those obtained using weakly nonlinear approaches. We show that the steady bifurcation occurring for $Re>212$ for a non-rotating sphere is changed into an imperfect bifurcation, unveiling the existence of two other base-flow solutions which are always unstable. Waves in Newton's bucket J. Mougel, D. Fabre, L. Lacaze The motion of a liquid in an open cylindrical tank rotating at a constant rate around its vertical axis of symmetry, a configuration called Newton's bucket, is investigated using a linear stability approach. This flow is shown to be affected by several families of waves, all weakly damped by viscosity.
The wave families encountered correspond to: surface waves which can be driven either by gravity or centrifugal acceleration, inertial waves due to Coriolis acceleration which are singular in the inviscid limit, and Rossby waves due to height variations of the fluid layer. These waves are described in the inviscid and viscous cases by means of mathematical considerations, global stability analysis and various asymptotic methods; and their properties are investigated over a large range of parameters $(a,Fr)$ , with $a$ the aspect ratio and $Fr$ the Froude number. Waves and instabilities in rotating free surface flows Journal: Mechanics & Industry / Volume 15 / Issue 2 / 2014 The stability properties of the rotating free surface flow in a cylindrical container is studied using a global stability approach, considering successively three models. For the case of solid body rotation (Newton's bucket), all eigenmodes are found to be stable, and are classified into three families: gravity waves, singular inertial modes, and Rossby waves. For the case of a potential flow, an instability is found. The mechanism is explained as a resonance between gravity waves and centrifugal waves, and is thought to be at the origin of the "rotating polygon instability" observed in experiments where the flow is driven by rotation of the bottom plate (see L. Tophøj, J. Mougel, T. Bohr, D. Fabre, The Rotating Polygon Instability of a Swirling Free Surface Flow, Phys. Rev. Lett. 110 (2013) 194502). Finally, in the case of the Rankine vortex which in fact consists in the combination of the two first cases, we report a new instability mechanism involving Rossby and gravity waves. New insights into the influence of breed and time of the year on the response of ewes to the 'ram effect' A. Chanvallon, L. Sagot, E. Pottier, N. Debus, D. François, T. Fassier, R. J. Scaramuzzi, C. Fabre-Nys Journal: animal / Volume 5 / Issue 10 / 26 August 2011 Published online by Cambridge University Press: 27 May 2011, pp. 1594-1604 Exposure of anoestrous ewes to rams induces an increase in LH secretion, eventually leading to ovulation. This technique therefore is an effective, low-cost and hormone-free way of mating sheep outside the breeding season. However, the use of this technique is limited by the variability of the ewes' responses. In this study, our objective was to understand more completely the origins of this variability and to determine the relative roles of breed, the point in time during anoestrus and the depth of anoestrus on the response to the 'ram effect'. In the first experiment, the pattern of anoestrus on the basis of the concentration of progesterone determined weekly, was determined in four breeds including two less seasonal (Mérinos d'Arles and Romane), one highly seasonal (Mouton Vendéen) and one intermediate (Île-de-France) breeds. Anoestrus was longer and deeper in Mouton Vendéen and Île-de-France than in Romane or Mérinos d'Arles. In the second experiment, we used the same four breeds and tested their hypophyseal response to a challenge with a single dose of 75 ng gonadotrophin-releasing hormone (GnRH) in early, mid and late anoestrus, and then we examined their endocrine and ovarian responses to the 'ram effect'. Most (97%) ewes responded to GnRH and most (93%) showed a short-term increase in LH pulsatility following the 'ram effect'. 
The responses in both cases were higher in females that went on to ovulate, suggesting that the magnitude of the hypophyseal response to a GnRH challenge could be a predictor of the response to the 'ram effect'. As previously observed, the best ovarian response was in Mérinos d'Arles at the end of anoestrus. However, there was no relationship between the proportion of females in the flock showing spontaneous ovulation and the response to the 'ram effect' of anoestrous ewes from the same flock. Structure of a steady drain-hole vortex in a viscous fluid L. BØHLING, A. ANDERSEN, D. FABRE We use direct numerical simulations to study a steady bathtub vortex in a cylindrical tank with a central drain-hole, a flat stress-free surface and velocity prescribed at the inlet. We find that the qualitative structure of the meridional flow does not depend on the radial Reynolds number, whereas we observe a weak overall rotation at a low radial Reynolds number and a concentrated vortex above the drain-hole at a high radial Reynolds number. We introduce a simple analytically integrable model that shows the same qualitative dependence on the radial Reynolds number as the simulations and compares favourably with the results for the radial velocity and the azimuthal velocity at the surface. Finally, we describe the height dependence of the radius of the vortex core and the maximum of the azimuthal velocity at a high radial Reynolds number, and we show that the data on the radius of the vortex core and the maximum of the azimuthal velocity as functions of height collapse on single curves by appropriate scaling. Stochastic forcing of the Lamb–Oseen vortex J. FONTANE, P. BRANCHER, D. FABRE Journal: Journal of Fluid Mechanics / Volume 613 / 25 October 2008 Print publication: 25 October 2008 The aim of the present paper is to analyse the dynamics of the Lamb–Oseen vortex when continuously forced by a random excitation. Stochastic forcing is classically used to mimic external perturbations in realistic configurations, such as variations of atmospheric conditions, weak compressibility effects, wing-generated turbulence injected into aircraft wakes, or free-stream turbulence in wind tunnel experiments. The linear response of the Lamb–Oseen vortex to stochastic forcing can be decomposed in relation to the azimuthal symmetry of the perturbation given by the azimuthal wavenumber m. In the axisymmetric case m = 0, we find that the response is characterized by the generation of vortex rings at the outer periphery of the vortex core. This result is consistent with recurrent observations of such dynamics in the study of vortex–turbulence interaction. When considering helical perturbations m = 1, the response at large axial wavelengths consists of a global translation of the vortex, a feature very similar to the phenomenon of vortex meandering (or wandering) observed experimentally, corresponding to an erratic displacement of the vortex core. At smaller wavelengths, we find that stochastic forcing can excite specific oscillating modes of the Lamb–Oseen vortex. More precisely, damped critical-layer modes can emerge via a resonance mechanism. For perturbations with higher azimuthal wavenumber m ≥ 2, we find no structure that clearly dominates the response of the vortex. 
Aladin: The First European Lidar in Space Didier Morançais, Frédéric Fabre Published online by Cambridge University Press: 01 February 2011, FF7.2 After several decades of observations from space, direct measurements of the global threedimensional wind field remain elusive, however crucial to weather predictions. The ALADIN instrument, payload of the AEOLUS satellite (figure 1), will provide measurements of atmospheric wind profiles with global Earth coverage for the climatology and meteorology users. The AEOLUS programme is sponsored by the European Space Agency with a launch planned in 2008. ALADIN belongs to a new class of Earth Observation payloads and will be the first European Lidar in space. The instrument comprises a diode-pumped high energy Nd:YAG laser and a direct detection receiver operating on aerosol and molecular backscatter signals in parallel. In addition to the Flight Model (FM), two instrument models are developed: a Pre-development Model (PDM) and an Opto-Structure-Thermal Model (OSTM). The OSTM integration has been completed and the flight equipments are under manufacturing. This paper describes the instrument design as well as the development status. The ALADIN instrument is developed under prime contractor EADS Astrium SAS with a consortium of thirty companies. Fine needle aspiration in the pre-operative diagnosis of melanotic neuroectodermal tumour of infancy H. Galera-Ruiz, D. Gomez-Angel, F. J. Vazquez-Ramirez, J. C. Sanguino-Fabre, C. I. Salazar-Fernandez, J. Gonzalez-Hachero Journal: The Journal of Laryngology & Otology / Volume 113 / Issue 6 / June 1999 A case of melanotic neuroectodermal tumour of infancy is decribed. The pre-operative diagnosis was made on cytological material obtained by fine needle aspiration. The patient was a three-month-old male infant with a rapidly growing maxillary tumour mass that also involved the pterygomaxillary fossae and the floor of the orbit. In addition to the typical clinical presentation, the cytology is also distinctive showing a dual population of small neuroblastic cells and large melanin-containing epithelial cells. Histological, immunohistochemical and electron microscopic examination of the excised mass confirmed the initial diagnosis. The pre-operative distinction of this tumour from other small round cell tumours of infancy (rhabdomyosarcoma, neuroblastoma, melanoma and lymphoma), is essential in order to plan the most complete resection therefore reducing the possibilities of tumour recurrence. This tumour belongs to a field of pathology with which many otolaryngologists may not be familiar. Non-Linear Light Scattering in a Two Component Medium Optical Limiting Application V. Joudrier, J C. Fabre, P. Bourdon, F. Hache, C Flytzanis The first experimental investigation of an optical limiting device based on a suspension of spherical particles in a surrounding liquid is presented. The expected property is the non-linear light scattering based on a refractive index mismatch between the two components that appears at high intensity. Several experiments performed at 532 nm with nanosecond laser pulses provide good indication that such non-linear scattering is observed. Nonlinear Mechanisms in Carbon-Black Suspension in a Limiting Geometry Francois Fougeanet, Jean-Claude Fabre Although Carbon-Black suspensions (CBS) have been studied extensively for optical limiting applications, the microscopic mechanisms responsible for their limiting behavior still remain unclear. 
To study the mechanisms leading to the nonlinearity, we have performed a pump-probe experiment coupled with plasma emission measurements. Time resolved probe transmission have permitted to differentiate bubbles from plasma effects in the case of Carbon-Black in CS2. Other effects are discussed in terms of limiting performance. A new technique using cultured epithelial sheets for the management of epistaxis associated with hereditary haemorrhagic telangiectasia Catherine M. Milton, J. C. Shotton, D. J. Premachandran, Barbara M. Woodward, J. W. Fabre, R. J. Sergeant A new technique for the treatment of severe epistaxis associated with hereditary haemorrhagic telangiectasia is described. The nasal septum and inferior turbinates, surgically denuded of respiratory epithelium, were grafted using autografts of cultured epithelial sheets derived from buccal epithelium. All patients upon whom this technique has been used have shown considerable lessening in the frequency and severity of their epistaxes although two patients received grafts on two occasions, in each case approximately three months apart. It is postulated that a nasal lining of stratified squamous epithelium is likely to be more resistant to trauma than the normal respiratory type, and this is supported by the observation that bleeds very seldom occur from the oral cavity in this syndrome. Recent results on implosions directly driven at λ = 0·26-μm laser wavelength M. Koenig, V. Malka, E. Fabre, P. Hammerling, A. Michard, J. M. Boudenne, D. Batani, J. P. Garçonnet, P. Fews Journal: Laser and Particle Beams / Volume 10 / Issue 4 / December 1992 New diagnostics were implemented on the implosion experiments performed at LULI to improve our measurements of hydroefficiencies: Neutron chronometry gives the time of emission of the fusion reaction products as measured from the peak of the laser pulse; thereby making it possible to correlate the neutron emission with X-ray emission. Core imaging, based upon a maximum entropy reconstruction technique, leads to core size determination and also is a promising diagnostic for wall nonuniformities induced by irradiation conditions. A simple model is developed to retrieve experimental spectra of α-particles. Treatment of chronic mastoiditis by grafting of mastoid cavities with autologous epithelial layers generated by in vitro culture of buccal epithelium D. J. Premachandra, B. Woodward, C. M. Milton, R. J. Sergeant, J. W. Fabre Autologous cultured epithelial layers were established from biopsies from the mucosa of the cheek, a nonkeratinizing region of the oral cavity. These were grafted to the unepithelialized mastoid cavities of nine patients with chronic mastoiditis and severe otorrhoea varying from two to 30 years' duration. All procedures were performed on an out-patient basis, with no anaesthesia except for topical anaesthesia for the mucosal biopsy. In seven of the patients the grafts took well, with complete resolution of the otorrhoea for a minimum follow-up period of eight months. In one patient there was a partial take of the graft with substantial improvement in the rate of discharge. The mastoid cavities of two patients were biopsied five months after grafting, and demonstrated a stratified squamous epithelium, with keratinization of the epithelium clearly evident. Structure and Dynamics of the Ferrosmectic Phase Pascale Fabre We have elaborated a system -that we call a ferrosmectic- which contains magnetic particles included in a lamellar phase. 
Because of the presence of the confined particles, this phase exhibits specific features that are revealed by submitting it to a magnetic field as well as by neutrons or quasi-elastic light scattering experiments. We present here a review of the different properties of this ferrosmectic phase. From Lugdunum to Convenae: recent work on Saint-Bertrand-de-Comminges (Haute-Garonne) J. Guyon, P. Aupert, C. Dieulafait, G. Fabre, J. Gallagher, M. Janon, J.-M. Pailler, J.-L. Paillet, C. Petit, R. Sablayrolles, D. Schaad, J.-L. Schenk, F. Tassaux Journal: Journal of Roman Archaeology / Volume 4 / 1991 Published online by Cambridge University Press: 16 February 2015, pp. 89-122
A&A 529, A62 (2011)

High-resolution simulations of planetesimal formation in turbulent protoplanetary discs

A. Johansen¹⋆, H. Klahr² and Th. Henning²

1 Lund Observatory, Box 43, 221 00 Lund, Sweden
e-mail: [email protected]
2 Max-Planck-Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany

We present high-resolution computer simulations of dust dynamics and planetesimal formation in turbulence generated by the magnetorotational instability. We show that the turbulent viscosity associated with magnetorotational turbulence in a non-stratified shearing box increases when going from 256³ to 512³ grid points in the presence of a weak imposed magnetic field, yielding a turbulent viscosity of α ≈ 0.003 at high resolution. Particles representing approximately meter-sized boulders concentrate in large-scale high-pressure regions in the simulation box. The appearance of zonal flows and particle concentration in pressure bumps is relatively similar at moderate (256³) and high (512³) resolution. In the moderate-resolution simulation we activate particle self-gravity at a time when there is little particle concentration, in contrast with previous simulations where particle self-gravity was activated during a concentration event. We observe that bound clumps form over the next ten orbits, with initial birth masses of a few times the dwarf planet Ceres. At high resolution we activate self-gravity during a particle concentration event, leading to a burst of planetesimal formation, with clump masses ranging from a significant fraction of to several times the mass of Ceres. We present a new domain decomposition algorithm for particle-mesh schemes. Particles are spread evenly among the processors and the local gas velocity field and assigned drag forces are exchanged between a domain-decomposed mesh and discrete blocks of particles. We obtain good load balancing on up to 4096 cores even in simulations where particles sediment to the mid-plane and concentrate in pressure bumps.

Key words: accretion, accretion disks / methods: numerical / magnetohydrodynamics (MHD) / planets and satellites: formation / planetary systems / turbulence

⋆ Work partially done at Leiden Observatory, Leiden University, PO Box 9513, 2300 RA Leiden, The Netherlands.

1. Introduction

The formation of km-scale planetesimals from dust particles involves a complex interplay of physical processes, including most importantly collisional sticking (Weidenschilling 1984, 1997; Dullemond & Dominik 2005), the self-gravity of the particle mid-plane layer (Safronov 1969; Goldreich & Ward 1973; Sekiya 1998; Youdin & Shu 2002; Schräpler & Henning 2004; Johansen et al. 2007), and the motion and structure of the turbulent protoplanetary disc gas (Weidenschilling & Cuzzi 1993; Johansen et al. 2006; Cuzzi et al. 2008). In the initial growth stages micrometer-sized silicate monomers readily stick to form larger dust aggregates (Poppe et al. 2000; Blum & Wurm 2008). Further growth towards macroscopic sizes is hampered by collisional fragmentation and bouncing (Zsom et al. 2010), limiting the maximum particle size to a few cm or less (depending on the assumed velocity threshold for collisional fragmentation, see Brauer et al. 2008a; Birnstiel et al. 2009).
High-speed collisions between small impactors and a large target constitute a path to net growth (Wurm et al. 2005), but the transport of small particles away from the mid-plane by turbulent diffusion limits the resulting growth rate dramatically (Johansen et al. 2008). Material properties are also important. Wada et al. (2009) demonstrated efficient sticking between ice aggregates consisting of 0.1 μm monomers at speeds up to 50 m/s. Turbulence can play a positive role for growth by concentrating mm-sized particles in convection cells (Klahr & Henning 1997) and between small-scale eddies (Cuzzi et al. 2008) occurring near the dissipative scale of the turbulence. Larger m-sized particles pile up on large scales (i.e. larger than the gas scale height) in long-lived geostrophic pressure bumps surrounded by axisymmetric zonal flows (Johansen et al. 2009a).

In the model presented in Johansen et al. (2007, hereafter referred to as J07), approximately meter-sized particles settle to form a thin mid-plane layer in balance between sedimentation and stirring by the gas, which has developed turbulence through the magnetorotational instability (Balbus & Hawley 1991). Particles then concentrate in nearly axisymmetric gas high-pressure regions which appear spontaneously in the turbulent flow (Fromang & Nelson 2005; Johansen et al. 2006; Lyra et al. 2008a), reaching local column densities up to ten times the average. The passive concentration is augmented as particles locally accelerate the gas towards the Keplerian speed, which leads to accumulation of particles drifting rapidly in from exterior orbits (a manifestation of the streaming instability of Youdin & Goodman 2005). The gravitational attraction between the particles in the overdense regions becomes high enough to initiate first a slow radial contraction and then, as the local mass density becomes comparable to the Roche density, a full non-axisymmetric collapse into gravitationally bound clumps with masses comparable to that of the 950-km-diameter dwarf planet Ceres (M_Ceres ≈ 9.4 × 10²⁰ kg). Such large planetesimal birth sizes are in agreement with constraints from the current observed size distribution of the asteroid belt (Morbidelli et al. 2009) and Neptune Trojans (Sheppard & Trujillo 2010).

Some of the open questions related to this picture of planetesimal formation are to what degree the results of Johansen et al. (2007) are affected by the fact that self-gravity was turned on after particles had concentrated in a pressure bump, and how the emergence and amplitude of pressure bumps are affected by numerical resolution.

In this paper we present high-resolution and long-time-integration simulations of planetesimal formation in turbulence caused by the magnetorotational instability (MRI). We find that the large-scale geostrophic pressure bumps that are responsible for particle concentration are sustained when going from moderate (256³) to high (512³) resolution. Particle concentration in these pressure bumps is also relatively independent of resolution. We present a long-time-integration simulation performed at moderate resolution (256³) where particles and self-gravity are started at the same time, in contrast to earlier simulations where self-gravity was not turned on until a strong concentration event occurred (J07). We also study the initial burst of planetesimal formation at 512³ resolution.
We present evidence for collisions between gravitationally bound clumps, observed at both moderate and high resolution, and indications that the initial mass function of gravitationally bound clumps involves masses ranging from a significant fraction of to several times the mass of Ceres. We point out that the physical nature of the collisions is unclear, since our numerical algorithm does not allow clumps to contract below the grid size. Gravitational scattering and binary formation are other possible outcomes of the close encounters, in the case of resolved dynamics. Finding the initial mass function of planetesimals forming from the gravitationally bound clumps will ultimately require an improved algorithm for the dynamics and interaction of bound clumps, as well as the inclusion of particle shattering and coagulation during the gravitational contraction.

The paper is organised as follows. In Sect. 2 we describe the dynamical equations for gas and particles. Section 3 contains descriptions of a number of improvements made to the Pencil Code in order to be able to perform particle-mesh simulations on up to at least 4096 cores. In Sect. 4 we explain the choice of simulation parameters. The evolution of gas turbulence and large-scale pressure bumps is analysed in Sect. 5. Particle concentration in simulations with no self-gravity is described in Sect. 6. Simulations including particle self-gravity are presented in Sect. 7 (256³ resolution) and Sect. 8 (512³ resolution). We summarise the paper and discuss the implications of our results in Sect. 9.

2. Dynamical equations

We perform simulations solving the standard shearing box MHD/drag force/self-gravity equations for gas defined on a fixed grid and solid particles evolved as numerical superparticles. We use the Pencil Code, a sixth order spatial and third order temporal symmetric finite difference code¹.

We model the dynamics of a protoplanetary disc in the shearing box approximation. The coordinate frame rotates at the Keplerian frequency Ω at an arbitrary distance r₀ from the central star. The axes are oriented such that x points radially away from the central gravity source, y points along the Keplerian flow, while z points vertically out of the plane.

2.1. Gas velocity

The equation of motion for the gas velocity u relative to the Keplerian flow is
\[
\frac{\partial\boldsymbol{u}}{\partial t} + (\boldsymbol{u}\cdot\nabla)\boldsymbol{u} + u_y^{(0)}\frac{\partial\boldsymbol{u}}{\partial y} = 2\Omega u_y\,\hat{\boldsymbol{x}} - \frac{1}{2}\Omega u_x\,\hat{\boldsymbol{y}} + 2\Omega\Delta v\,\hat{\boldsymbol{x}} + \frac{1}{\rho}\boldsymbol{J}\times\left(\boldsymbol{B}+B_0\hat{\boldsymbol{z}}\right) - \frac{1}{\rho}\nabla P + \boldsymbol{f}_{\rm drag} + \boldsymbol{f}_\nu(\boldsymbol{u}). \quad (1)
\]
The left hand side includes advection both by the velocity field u itself and by the linearised Keplerian flow u_y^{(0)} = −(3/2)Ωx. The first two terms on the right hand side represent the Coriolis force in the x- and y-directions, modified in the y-component by the radial advection of the Keplerian flow, which changes −2Ωu_x into −(1/2)Ωu_x. The third term mimics a global radial pressure gradient which reduces the orbital speed of the gas by the positive amount Δv. The fourth and fifth terms in Eq. (1) are the Lorentz and pressure gradient forces. The current density is calculated from Ampère's law μ₀J = ∇ × B. The Lorentz force is modified to take into account a mean vertical field component of strength B₀. The sixth term is a drag force term described in Sect. 2.4. The high-order numerical scheme of the Pencil Code has very little numerical dissipation from time-stepping the advection term (Brandenburg 2003), so we add explicit viscosity through the term f_ν(u) in Eq. (1). We use sixth order hyperviscosity with a constant dynamic viscosity μ₃ = ν₃ρ,
\[
\boldsymbol{f}_\nu(\boldsymbol{u}) = \frac{\mu_3}{\rho}\nabla^6\boldsymbol{u}. \quad (2)
\]
This form of the viscosity conserves momentum. The ∇⁶ operator is defined as ∇⁶ = ∂⁶/∂x⁶ + ∂⁶/∂y⁶ + ∂⁶/∂z⁶.
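For illustration, a sixth-order hyperdissipation term of this form can be evaluated on a periodic cube as below; this is a minimal sketch that evaluates the operator spectrally for a constant ν₃, which is a simplification chosen for brevity and not the finite-difference stencil actually used in the Pencil Code. The function name and arguments are assumptions.

```python
import numpy as np

def hyperdissipation(f, dx, nu3):
    """Evaluate nu3 * (d^6/dx^6 + d^6/dy^6 + d^6/dz^6) f on a periodic cube."""
    n = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    f_hat = np.fft.fftn(f)
    # (i k)^6 = -k^6 in each direction, so the term damps short wavelengths
    lap6_hat = -(kx**6 + ky**6 + kz**6) * f_hat
    return nu3 * np.real(np.fft.ifftn(lap6_hat))
```

Because the damping rate scales as k⁶, the dissipation acts almost exclusively near the grid scale, which is the reason hyperviscosity leaves the large-scale flow essentially inviscid.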
It was shown by Johansen et al. (2009a) that hyperviscosity simulations show zonal flows and pressure bumps very similar to simulations using Navier–Stokes viscosity.

2.2. Gas density

The continuity equation for the gas density ρ is
\[
\frac{\partial\rho}{\partial t} + (\boldsymbol{u}\cdot\nabla)\rho + u_y^{(0)}\frac{\partial\rho}{\partial y} = -\rho\nabla\cdot\boldsymbol{u} + f_D(\rho). \quad (3)
\]
The diffusion term is defined as
\[
f_D(\rho) = D_3\nabla^6\rho, \quad (4)
\]
where D₃ is the hyperdiffusion coefficient necessary to suppress Nyquist scale wiggles arising in regions where the spatial density variation is high. We adopt an isothermal equation of state with pressure P = ρc_s² and (constant) sound speed c_s.

2.3. Induction equation

The induction equation for the magnetic vector potential A is (see Brandenburg et al. 1995, for details)
\[
\frac{\partial\boldsymbol{A}}{\partial t} + u_y^{(0)}\frac{\partial\boldsymbol{A}}{\partial y} = \boldsymbol{u}\times\left(\boldsymbol{B}+B_0\hat{\boldsymbol{z}}\right) + \frac{3}{2}\Omega A_y\,\hat{\boldsymbol{x}} + \boldsymbol{f}_\eta(\boldsymbol{A}). \quad (5)
\]
The resistivity term is
\[
\boldsymbol{f}_\eta(\boldsymbol{A}) = \eta_3\nabla^6\boldsymbol{A}, \quad (6)
\]
where η₃ is the hyperresistivity coefficient. The magnetic field is calculated from B = ∇ × A.

2.4. Particles and drag force scheme

The dust component is treated as a number of individual superparticles, the position x and velocity v of each evolved according to
\[
\frac{{\rm d}\boldsymbol{x}}{{\rm d}t} = \boldsymbol{v} + u_y^{(0)}(x)\,\hat{\boldsymbol{y}}, \quad (7)
\]
\[
\frac{{\rm d}\boldsymbol{v}}{{\rm d}t} = 2\Omega v_y\,\hat{\boldsymbol{x}} - \frac{1}{2}\Omega v_x\,\hat{\boldsymbol{y}} - \frac{1}{\tau_{\rm f}}\left(\boldsymbol{v}-\boldsymbol{u}\right) - \nabla\Phi. \quad (8)
\]
The particles feel no pressure or Lorentz forces, but are subjected to the gravitational potential Φ of their combined mass. Particle collisions are taken into account as well (see Sect. 2.7 below).

Two-way drag forces between gas defined on a fixed grid and Lagrangian particles are calculated through a particle-mesh method (see Youdin & Johansen 2007, for details). First the gas velocity field is interpolated to the position of a particle, using second order spline interpolation. The drag force on the particle is then trivial to calculate. To ensure momentum conservation we then take the drag force and assign it with the opposite sign among the 27 nearest grid points, using the Triangular Shaped Cloud scheme for the assignment (Hockney & Eastwood 1981).

2.5. Units

All simulations are run with natural units, meaning that we set c_s = Ω = μ₀ = ρ₀ = 1. Here ρ₀ represents the mid-plane gas density, which in our unstratified simulations is the same as the mean density in the box. The time and velocity units are thus [t] = Ω⁻¹ and [v] = c_s. The derived unit of length is the scale height H ≡ c_s/Ω = [l]. The magnetic field unit is [B] = c_s(μ₀ρ₀)^{1/2}.

2.6. Self-gravity

The gravitational attraction between the particles is calculated by first assigning the particle mass density ρ_p on the grid, using the Triangular Shaped Cloud scheme described above. Then the gravitational potential Φ at the grid points is found by inverting the Poisson equation
\[
\nabla^2\Phi = 4\pi G\rho_{\rm p} \quad (9)
\]
using a fast Fourier transform (FFT) method (see online supplement of J07). Finally the self-potential of the particles is interpolated to the positions of the particles and the acceleration added to the particle equation of motion (Eq. (8)). We define the strength of self-gravity through the dimensionless parameter
\[
\tilde{G} \equiv \frac{4\pi G\rho_0}{\Omega^2}, \quad (10)
\]
where ρ₀ is the gas density in the mid-plane. This is practical since all simulations are run with Ω = ρ₀ = c_s = 1. Using Σ_g = (2π)^{1/2}ρ₀H for the gas column density, we obtain a connection between the dimensionless G̃ and the relevant dimensional parameters of the box,
\[
\tilde{G} = \sqrt{8\pi}\,\frac{G\Sigma_{\rm g}}{c_{\rm s}\Omega}. \quad (11)
\]
We assume a standard ratio of particle to gas column densities of 0.01. The self-gravity of the gas is ignored in both the Poisson equation and the gas equation of motion.
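A minimal sketch of the FFT-based Poisson inversion described above is given below. It is a simplified stand-in for the Pencil Code implementation: the real code has to handle the shear-periodic radial boundary, which is omitted here, and the function name, grid size and value of G in the usage comment are assumptions for illustration only.

```python
import numpy as np

def poisson_fft(rhop, dx, G=1.0):
    """Solve laplace(phi) = 4*pi*G*rhop on a fully periodic cube via FFT."""
    n = rhop.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    rhop_hat = np.fft.fftn(rhop)
    phi_hat = np.zeros_like(rhop_hat)
    nonzero = k2 > 0.0
    # in Fourier space the Laplacian becomes -k^2, so phi_k = -4 pi G rho_k / k^2
    phi_hat[nonzero] = -4.0 * np.pi * G * rhop_hat[nonzero] / k2[nonzero]
    # the k = 0 mode (mean potential) is arbitrary and set to zero
    return np.real(np.fft.ifftn(phi_hat))

# example usage (code units with rho0 = Omega = 1, so G = Gtilde / (4*pi)):
# phi = poisson_fft(particle_density_on_grid, dx=1.32 / 256, G=0.5 / (4.0 * np.pi))
```

The gravitational acceleration on each superparticle then follows by interpolating −∇Φ back to the particle positions, mirroring the TSC assignment used for the density.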
2.7. Collisions

Particle collisions become important inside dense particle clumps. In J07 the effect of particle collisions was included in a rather crude way, by reducing the relative rms speed of particles inside a grid cell on the collisional time-scale to mimic collisional cooling. Recently Rein et al. (2010) found that the inclusion of particle collisions suppresses condensation of small scale clumps from the turbulent flow and favours the formation of larger structures. Rein et al. (2010) claimed that the lack of collisions is an inherent flaw in the superparticle approach. However, Lithwick & Chiang (2007) presented a scheme for the collisional evolution of particle rings whereby the collision between two particles occupying the same grid cell is determined by drawing a random number (to determine whether the two particles are at the same vertical position). We have extended this algorithm to model collisions between superparticles based on a Monte Carlo algorithm. We obtain the correct collision frequency by letting nearby superparticle pairs collide on average once per collisional time-scale of the swarms of physical particles represented by each superparticle. We have implemented the superparticle collision algorithm in the Pencil Code and will present detailed numerical tests that show its validity in a paper in preparation (Johansen, Lithwick & Youdin, in prep.). The algorithm gives each particle a chance to interact with all other particles in the same grid cell. The characteristic time-scale for a representative particle in the swarm of superparticle i to collide with any particle from the swarm represented by superparticle j is calculated by considering the number density n_j represented by each superparticle, the collisional cross section σ_ij of two swarm particles, and the relative speed δv_ij of the two superparticles. For each possible collision a random number P is chosen. If P is smaller than δt/τ_coll, where δt is the time-step set by the magnetohydrodynamics, then the two particles collide. The collision outcome is determined as if the two superparticles were actual particles with radii large enough to touch each other. By solving for momentum conservation and energy conservation, with the possibility for inelastic collisions to dissipate kinetic energy to heat and deformation, the two colliding particles acquire their new velocity vectors instantaneously. All simulations include collisions with a coefficient of restitution of ϵ = 0.3 (e.g. Blum & Muench 1993), meaning that each collision leads to the dissipation of approximately 90% of the relative kinetic energy to deformation and heating of the colliding boulders. We include particle collisions here to obtain a more complete physical modelling. Detailed tests and analysis of the effect of particle collisions on clumping and gravitational collapse will appear in a future publication (Johansen, Lithwick & Youdin, in prep.).
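The Monte Carlo collision step described above can be sketched as follows. This is a schematic illustration rather than the actual Pencil Code implementation: equal superparticle masses, the collision time-scale prescription τ = 1/(nσδv), the data layout and the variable names are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng()
EPSILON = 0.3  # coefficient of restitution used in the paper

def collide_cell(pos, vel, nswarm, sigma, dt):
    """One Monte Carlo collision sweep over the superparticles in one grid cell.

    pos, vel : (N, 3) superparticle positions and velocities
    nswarm   : (N,) number densities of physical particles represented
    sigma    : (N, N) collisional cross sections of swarm-particle pairs
    dt       : current MHD time-step
    """
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            dv = vel[i] - vel[j]
            speed = np.linalg.norm(dv)
            rij = pos[i] - pos[j]
            dist = np.linalg.norm(rij)
            if speed == 0.0 or dist == 0.0:
                continue
            tau_coll = 1.0 / (nswarm[j] * sigma[i, j] * speed)
            if rng.random() < dt / tau_coll:
                # inelastic collision along the line of centres,
                # equal superparticle masses assumed for simplicity
                nhat = rij / dist
                dvn = np.dot(dv, nhat)
                impulse = 0.5 * (1.0 + EPSILON) * dvn * nhat
                vel[i] -= impulse
                vel[j] += impulse
```

With ϵ = 0.3 the normal component of the relative velocity is reversed and reduced to 30% of its value, so roughly 90% of the relative kinetic energy is dissipated per collision, consistent with the value quoted in the text.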
3. Computing resources and code optimisation

For this project we were kindly granted access to 4096 cores on the "Jugene" Blue Gene/P system at the Jülich Supercomputing Centre (JSC) for a total of five months. Each core of the Blue Gene/P has a clock speed of around 800 MHz. The use of the Pencil Code on several thousand cores required both trivial and more fundamental changes to the code. We describe these technical improvements in detail in this section. In the following nxgrid, nygrid, nzgrid refer to the full grid dimensions of the problem. We denote the processor number by ncpus and the directional processor numbers by nprocx, nprocy, nprocz.

3.1. Changes made to the Pencil Code

3.1.1. Memory optimisation

We had to remove several uses of global arrays, i.e. 2-D or 3-D arrays of linear size equal to the full grid. This mostly affected certain special initial conditions and boundary conditions.

An additional problem was the use of an array of size (ncpus,ncpus) in the particle communication. This array was replaced by a 1-D array with no problems. The runtime calculation of 2-D averages (e.g. gas and particle column density) was done in such a way that the whole (nxgrid, nygrid) array was collected at the root processor in an array of size (nxgrid, nygrid, ncpus), before appending a chunk of size (nxgrid, nygrid) to an output file. The storage array, used for programming convenience in collecting the 2-D average from all the relevant cores, became excessively large at high resolution and processor numbers, and we abandoned the previous method in favour of saving chunks of the column density array into separate files, each maintained by the root processors in the z-direction. A similar method was implemented for y-averages and x-averages. The above-mentioned global arrays had been used in the code for programming convenience and did not imply excessive memory usage at moderate resolution and processor numbers. Purging those arrays in favour of loops or smaller 1-D arrays was relatively straightforward.

3.1.2. Particle migration

At the end of a sub-time-step each processor checks whether any particles have left its spatial domain. Information about the number of migrating particles, and the processors that they will migrate into, is collected at each processor. The Pencil Code would then let all processors exchange migrating particles with all other processors. In practice particles of course only migrate to neighbouring processors. However, at processor numbers of 512 or higher, the communication load associated with each processor telling all other processors how many particles it wanted to send (mostly zero) was so high that it dominated over both the MHD and the particle evolution equations. Since particles in practice only migrate to the neighbouring processors anyway, we solved this problem by letting each processor communicate the number of migrating particles only to its immediate neighbours. Shear-periodic boundary conditions require a (simple) algorithm to determine the three neighbouring processors across the shearing boundary at the beginning of each sub-time-step.
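The neighbour-only exchange can be sketched with mpi4py as below; this is an illustrative stand-in for the Fortran/MPI implementation in the Pencil Code, and the neighbour list, the pickled-object messages and the function name are assumptions made for the sketch.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD

def migrate_particles(outgoing, neighbours):
    """Exchange migrating particles with immediate neighbours only.

    outgoing   : dict mapping neighbour rank -> list of particles to send
    neighbours : ranks of the adjacent subdomains, including the three
                 shear-periodic neighbours across the radial boundary
    """
    # post non-blocking sends to the neighbours only, instead of the
    # all-to-all handshake that dominated the run time at >= 512 cores
    requests = [comm.isend(outgoing.get(rank, []), dest=rank) for rank in neighbours]
    arrived = []
    for rank in neighbours:
        arrived.extend(comm.recv(source=rank))
    for req in requests:
        req.wait()
    return arrived
```

Since most neighbour messages are empty, the cost of this step stays negligible compared with the MHD and particle updates even at several thousand cores.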
3.2. Timings

Fig. 1. Scaling test for a particle-mesh problem with 512³ grid cells and 64 × 10⁶ particles. The particles are distributed evenly over the grid, so that each core has the same number of particles. The inverse code speed is normalised by the number of time-steps and by either the total number of grid points and particles (top panel) or by the number of grid points and particles per core (bottom panel).

With the changes described in Sects. 3.1.1 and 3.1.2 the Pencil Code can be run with gas and particles efficiently on several thousand processors, provided that the particles are well-mixed with the gas. In Fig. 1 we show timings for a test problem with 512³ grid cells and 64 × 10⁶ particles. The particles are distributed evenly over the grid, avoiding the load balancing challenges described below. We evolve gas and particles for 1000 time-steps, with the gas and particles subject to standard shearing box hydrodynamics and two-way drag forces. The lines show various drag force schemes – NGP corresponding to Nearest Grid Point and TSC to Triangular Shaped Cloud (Hockney & Eastwood 1981). We achieve near perfect scaling up to 4096 cores. Including self-gravity by an FFT method the code slows down by approximately a factor of two, but the scaling is only marginally worse than optimal, with less than 50% slowdown at 4096 cores. This must be seen in the light of the fact that 3-D FFTs involve several transpositions of the global density array, each transposition requiring the majority of grid points to be communicated to another processor (see online supplement of J07).

3.3. Particle parallelisation

At high resolution it becomes increasingly important to parallelise efficiently along at least two directions. In earlier publications we had run 256³ simulations on 64 processors, with the domain decomposition nprocy=32 and nprocz=2 (J07). Using two processors along the z-direction exploits the intrinsic mid-plane symmetry of the particle distribution, while the Keplerian shear suppresses most particle density variation in the azimuthal direction, so that each processor has approximately the same number of particles. However, at higher resolution we need either nprocz>2 or nprocx>1, both of which are subject to particle clumping (from either sedimentation or radial concentrations). This would in some cases slow down the code by a factor of 8–10. We therefore developed an improved particle parallelisation, which we denote particle block domain decomposition (PBDD). This new algorithm is described in detail in the following subsection.

3.3.1. Particle block domain decomposition

Table 1. Simulation parameters in natural units.

The steps in the PBDD scheme are as follows (a minimal code sketch of the brick counting and block assignment is given after the next paragraph):
– the fixed mesh points are domain-decomposed in the usual way (with ncpus = nprocx × nprocy × nprocz);
– particles on each processor are counted in bricks of size nbx × nby × nbz (typically nbx = nby = nbz = 4);
– bricks are distributed among the processors so that each processor has approximately the same number of particles; adopted bricks are referred to as blocks;
– the Pencil Code uses a third order Runge-Kutta time-stepping scheme. In the beginning of each sub-time-step particles are counted in blocks and the block counts are communicated to the bricks on the parent processors. The particle density assigned to ghost cells is folded across the grid, and the final particle density (defined on the bricks) is communicated back to the adopted blocks. This step is necessary because the drag force time-step depends on the particle density, and each particle assigns density not just to the nearest grid point, but also to the neighbouring grid points;
– in the beginning of each sub-time-step the gas density and gas velocity field are communicated from the main grid to the adopted particle blocks;
– drag forces are added to particles and back to the gas grid points in the adopted blocks. This partition aims at load balancing the calculation of drag forces;
– at the end of each sub-time-step the drag force contribution to the gas velocity field is communicated from the adopted blocks back to the main grid.

To illustrate the advantage of this scheme we show in Fig. 2 the instantaneous code speed for a problem where the particles have sedimented to the mid-plane of the disc. The grid resolution is 256³ and we run on 2048 cores, with 64 cores in the y-direction and 32 cores in the z-direction. The blue (black) line shows the results of running with standard domain decomposition, while the orange (grey) line shows the speed with the improved PBDD scheme.
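The brick-counting and block-assignment step of the PBDD scheme can be sketched as follows. This is a schematic illustration only: the real implementation works on the Fortran grid arrays and communicates blocks via MPI, and the greedy assignment strategy shown here is an assumption rather than the actual Pencil Code algorithm.

```python
import numpy as np

def assign_blocks(particle_counts_per_brick, ncpus):
    """Distribute bricks over processors so that particle counts are balanced.

    particle_counts_per_brick : 1-D array, one entry per brick (e.g. 4x4x4 cells)
    Returns an array giving the adopting processor ("block" owner) of each brick.
    """
    owner = np.empty(len(particle_counts_per_brick), dtype=int)
    load = np.zeros(ncpus, dtype=int)
    # visit the heaviest bricks first and give each to the least loaded core
    for brick in np.argsort(particle_counts_per_brick)[::-1]:
        target = np.argmin(load)
        owner[brick] = target
        load[target] += particle_counts_per_brick[brick]
    return owner
```

With roughly balanced block loads, the drag-force work per core stays close to the optimal npar/ncpus even when the particles sediment into a thin mid-plane layer.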
Due to the concentration of particles in the mid-plane the standard domain decomposition leaves many cores with few or no particles, giving poor load balancing. This problem is alleviated by the use of the PBDD scheme (orange/grey line). PBDD works well as long as single blocks do not achieve a higher particle density than the optimally distributed particle number npar/ncpus. In the case of strong clumping, e.g. due to self-gravity, the scheme is no longer as efficient. In such extreme cases one should ideally limit the local particle number in clumps by using sink particles.

Fig. 2. Code speed as a function of simulation time (top panel) and maximum particle number on any core (bottom panel) for 256³ resolution on 2048 cores. Standard domain decomposition (SDD) quickly becomes unbalanced with particles and achieves only the speed of the particle-laden mid-plane cores. With the Particle Block Domain Decomposition (PBDD) scheme the speed stays close to its optimal value, and the particle number per core (bottom panel) does not rise much beyond 10⁴.

4. Simulation parameters

The main simulations of the paper focus on the dynamics and self-gravity of solid particles moving in a gas flow which has developed turbulence through the magnetorotational instability (Balbus & Hawley 1991). We have performed two moderate-resolution simulations (with 256³ grid points and 8 × 10⁶ particles) and one high-resolution simulation (512³ grid points and 64 × 10⁶ particles). Simulation parameters are given in Table 1. We use a cubic box of dimensions (1.32H)³. Note that we use a sub-Keplerian speed difference of Δv = 0.05, which is higher than the Δv = 0.02 presented in the main paper of J07. The ability of pressure bumps to trap particles is generally reduced with increasing Δv (see online supplementary information for J07). Particle clumping by streaming instabilities also becomes less efficient as Δv is increased (J07; Bai & Stone 2010c).

Estimates of the radial pressure support in discs can be extracted from models of the column density and temperature structure. A gas parcel orbiting at a radial distance r from the star, where the disc aspect ratio is H/r and the mid-plane radial pressure gradient is d ln P/d ln r, orbits at a sub-Keplerian speed v = v_K − Δv. The speed reduction Δv is given by
\[
\Delta v = -\frac{1}{2}\,\frac{H}{r}\,\frac{\partial\ln P}{\partial\ln r}\,c_{\rm s}. \quad (12)
\]
In the Minimum Mass Solar Nebula of Hayashi (1981) d ln P/d ln r = −3.25 in the mid-plane (e.g. Youdin 2010). The resulting scaling of the sub-Keplerian speed with the orbital distance is
\[
\frac{\Delta v}{c_{\rm s}} \approx 0.05\left(\frac{r}{{\rm AU}}\right)^{1/4}. \quad (13)
\]
The slightly colder disc model used by Brauer et al. (2008a) yields a similar expression with a somewhat different normalisation. In more complex disc models the pressure profile is changed, e.g. at interfaces between regions of weak and strong turbulence (Lyra et al. 2008b). We use throughout this paper a fixed value of Δv/c_s = 0.05.

Another difference from the simulations of J07 is that the turbulent viscosity of the gas flow is around 2–3 times higher, because the turbulent viscosity increases when going from 256³ to 512³ (see Sect. 5). Therefore we had to use a stronger gravity in this paper than in J07 to see bound particle clumps (planetesimals) forming. We discuss the implications of using a higher disc mass further in our conclusions.

In all simulations we divide the particle component into four size bins, with friction times Ωτ_f = 0.25, 0.50, 0.75, 1.00, respectively. The particles drift radially because of the headwind from the gas orbiting at velocity u_y = −Δv relative to the Keplerian flow (Weidenschilling 1977a).
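As a quick check of the sub-Keplerian velocity estimate above, the short calculation below evaluates Δv/c_s from Eq. (12) for a Hayashi-type disc; the aspect-ratio value and its radial scaling are assumptions used only for illustration.

```python
# Delta v / c_s = -(1/2) (H/r) dlnP/dlnr, cf. Eq. (12)
def dv_over_cs(r_AU, dlnP_dlnr=-3.25, H_over_r_1AU=0.033):
    H_over_r = H_over_r_1AU * r_AU**0.25  # assumed flaring-disc scaling
    return -0.5 * H_over_r * dlnP_dlnr

print(dv_over_cs(1.0))  # ~0.054 at 1 AU
print(dv_over_cs(5.0))  # ~0.080 at 5 AU
```

Such estimates vary between disc models by a few tens of per cent, which motivates the single fixed value Δv/c_s = 0.05 adopted in the simulations.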
It is important to consider a distribution of particle sizes, since the dependence of the radial drift on the particle size can have a negative impact on the ability of the particle mid-plane layer to undergo gravitational collapse (Weidenschilling 1995). The Minimum Mass Solar Nebula has column density Σg = 1700 (r/AU)^-1.5 g cm⁻² ≈ 150 g cm⁻² at r = 5 AU (Hayashi 1981), and thus corresponds, through Eq. (11), to a much lower self-gravity parameter than the one adopted here. Since we use an approximately 10 times higher value for the gas column density, the mean free path of gas molecules is only λ ~ 10 cm. Therefore our choice of dimensionless friction times Ωτf = 0.25, 0.50, 0.75, 1.00 puts particles in the Stokes drag force regime (Weidenschilling 1977a). Here the friction time is independent of gas density, and the Stokes number Ωτf is proportional to the particle radius squared, so Ωτf = 0.25, 0.50, 0.75, 1.00 correspond to physical particle sizes ranging from 40 cm to 80 cm (see online supplement of J07). Scaling Eq. (11) to more distant orbital locations gives smaller physical particles and a gas column density closer to the Minimum Mass Solar Nebula value, since self-gravity is more efficient in regions where the rotational support is lower.

There are several points to be raised about our protoplanetary disc model. The high self-gravity parameter that we use implies not only a very high column density, but also that the gas component is close to gravitational instability. The self-gravity parameter is connected to the more commonly used Toomre parameter Q (where Q > 1 is the axisymmetric stability criterion for a flat disc in Keplerian rotation, see Safronov 1960; Toomre 1964); for our choice of parameters we have Q ≈ 3.2, which means that gas self-gravity should start to affect the dynamics (the disc is not formally gravitationally unstable, but the disc is massive enough to slow down the propagation of sound waves). Another issue with such a massive disc is our assumption of ideal MHD. The high gas column density decreases the ionisation by cosmic rays and X-rays and increases the recombination rate on dust grains (Sano et al. 2000; Fromang et al. 2002). Lesur & Longaretti (2007, 2011) have furthermore shown that the ratio of viscosity to resistivity, the magnetic Prandtl number, affects both small-scale and large-scale dynamics of saturated magnetorotational turbulence. Ideally all these effects should be taken into account. However, in this paper we choose to focus on the dynamics of solid particles in gas turbulence. Thus we include many physical effects that are important for particles (drag forces, self-gravity, collisions), while we ignore many other effects that would determine the occurrence and strength of gas turbulence. This approach allows us to perform numerical experiments which yield insight into planetesimal formation with relatively few free parameters.

4.1. Initial conditions

Fig. 3. Maxwell and Reynolds stresses as a function of time. The Reynolds stress is approximately five times lower than the Maxwell stress. There is a marked increase in the turbulent stresses when increasing the resolution from 256³ to 512³ at a fixed mean vertical field B0 = 0.0015, likely due to better resolution of the most unstable MRI wavelength at higher resolution. Using B0 = 0.003 at 256³ gives turbulence properties more similar to 512³.

The gas is initialised to have unit density everywhere in the box. The magnetic field is constant, B = B0 e_z.
The gas velocity field is set to be sub-Keplerian with uy = −Δv, and we furthermore perturb all components of the velocity field by small random fluctuations with amplitude δv = 0.001, to seed modes that are unstable to the magnetorotational instability. In simulations with particles we give particles random initial positions and zero velocity.

We start by describing the evolution of gas without particles, since the large-scale geostrophic pressure bumps appearing in the gas control particle concentration and thus the overall ability for planetesimals to form by self-gravity. The most important agent for driving gas dynamics is the magnetorotational instability (MRI, Balbus & Hawley 1991), which exhibits dynamical growth when the vertical component of the magnetic field is neither too weak nor too strong. The non-stratified MRI saturates to a state of non-linear subsonic fluctuations (e.g. Hawley et al. 1995). In this state there is an outward angular momentum flux through hydrodynamical Reynolds stresses ⟨ρ ux uy⟩ and magnetohydrodynamical Maxwell stresses −⟨Bx By⟩/μ0. In Fig. 3 we show the Maxwell and Reynolds stresses as a function of time. Using a mean vertical field of B0 = 0.0015 (corresponding to a plasma beta of β ≈ 9 × 10⁵) the turbulent viscosity almost triples when going from 256³ to 512³ grid points. This is in stark contrast with zero net flux simulations that show decreasing turbulence with increasing resolution (Fromang & Papaloizou 2007). We interpret the behaviour of our simulations as an effect of underresolving the most unstable wavelength of the magnetorotational instability. Considering a vertical magnetic field of constant strength B0, the most unstable wave number of the MRI is (Balbus & Hawley 1991)

kz = (15/16)^(1/2) Ω/vA ,    (15)

where vA = B0/√(μ0 ρ0) is the Alfvén speed. The most unstable wavelength is λz = 2π/kz. For B0 = 0.0015 we get λz ≈ 0.01H. The resolution elements are δx ≈ 0.005H at 256³ and δx ≈ 0.0026H at 512³. Thus we get a significant improvement in the resolution of the most unstable wavelength when going from 256³ to 512³ grid points. Other authors (Simon et al. 2009; Yang et al. 2009) have reported a similar increase in turbulent activity of net-flux simulations with increasing resolution. Our simulations show that this increase persists up to at least β ≈ 9 × 10⁵.

Fig. 4. The gas column density averaged over the azimuthal direction, as a function of radial coordinate x and time t in orbits. Large-scale pressure bumps appear with approximately 1% amplitude at both 256³ and 512³ resolution.

The original choice of B0 = 0.0015 was made in J07 in order to prevent the turbulent viscosity from dropping below α = 0.001. However, we cannot obtain the same turbulent viscosity (i.e. α ~ 0.001) at higher resolution, given the same B0. For this reason we did all 256³ experiments on particle dynamics and self-gravity with B0 = 0.003 (β ≈ 2 × 10⁵), yielding approximately the same turbulent viscosity as in the high-resolution simulation. The Reynolds and Maxwell stresses can be translated into an average turbulent viscosity (following the notation of Brandenburg et al. 1995), with separate contributions from the velocity field and from the magnetic field. We can further normalise the turbulent viscosities by the sound speed cs and gas scale height H (Shakura & Sunyaev 1973),

α = νt/(cs H) .    (18)

We thus find a turbulent viscosity of α ≈ 0.001, α ≈ 0.0022, and α ≈ 0.003 for runs M1, M2, and H, respectively.
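To make the resolution argument above concrete, the following sketch estimates the wavelength of the fastest-growing MRI mode in code units (μ0 = ρ0 = Ω = 1) and compares it with the grid spacing of the 256³ and 512³ runs; the prefactor follows the linear estimate in Eq. (15) as reconstructed above, so the exact numbers are illustrative.

```python
import numpy as np

def mri_wavelength(B0, rho0=1.0, mu0=1.0, Omega=1.0):
    """Wavelength of the fastest-growing MRI mode, lambda_z = 2*pi/k_z with
    k_z = sqrt(15/16) * Omega / v_A (code units)."""
    v_A = B0 / np.sqrt(mu0 * rho0)
    return 2.0 * np.pi * v_A / (np.sqrt(15.0 / 16.0) * Omega)

L = 1.32  # box size in units of the gas scale height H
for B0, N in ((0.0015, 256), (0.0015, 512), (0.003, 256)):
    lam, dx = mri_wavelength(B0), L / N
    print(f"B0={B0}, N={N}: lambda_z={lam:.4f}H, cells per wavelength={lam/dx:.1f}")
# Roughly 1.9 cells per wavelength at 256^3 with B0=0.0015, but ~3.8 at 512^3
# or at 256^3 with B0=0.003, consistent with the discussion above.
```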
The combination of radial pressure support and two-way drag forces allows systematic relative motion between gas and particles, which is unstable to the streaming instability (Youdin & Goodman 2005; Youdin & Johansen 2007; Johansen & Youdin 2007; Miniati 2010; Bai & Stone 2010a,b). Streaming instabilities and magnetorotational instabilities can operate in concurrence (J07; Balsara et al. 2009; Tilley et al. 2010). However, we find that particles concentrate in high-pressure bumps forming due to the MRI, so that streaming instabilities are a secondary effect in the simulations. A necessity for the streaming instability to operate is a solids-to-gas ratio that is locally at least unity. The particle density in the mid-plane layer is reduced by turbulent diffusion (which is mostly caused by the MRI), so in this way an increase in the strength of MRI turbulence can reduce the importance of the SI. Even though streaming instabilities do not appear to be the main driver of particle concentration in our simulations, the back-reaction drag force of the particles on the gas can potentially play a role during the gravitational contraction phase where local particle column densities get very high. The high gas column density needed for gravitational collapse in the current paper may also in reality preclude activity by the magnetorotational instability, given the low ionisation level in the mid-plane, which would make the streaming instability the more likely path to clumping and planetesimal formation. 5.1. Pressure bumps An important feature of magnetorotational turbulence is the emergence of large-scale slowly overturning pressure bumps (Fromang & Nelson 2005; Johansen et al. 2006). Such pressure bumps form with a zonal flow envelope due to random excitation of the zonal flow by large-scale variations in the Maxwell stress (Johansen et al. 2009a). Variations in the mean field magnitude and direction has also been shown to lead to the formation of pressure bumps in the interface region between weak and strong turbulence (Kato et al. 2009, 2010). Pressure bumps can also be launched by a radial variation in resistivity, e.g. at the edges of dead zones (Lyra et al. 2008b; Dzyurkevich et al. 2010). Large particles – pebbles, rocks, and boulders – are attracted to the center of pressure bumps, because of the drag force associated with the sub-Keplerian/super-Keplerian zonal flow envelope. In presence of a mean radial pressure gradient the trapping zone is slightly downstream of the pressure bump, where there is a local maximum in the combined pressure. An efficient way to detect pressure bumps is to average the gas density field over the azimuthal and vertical directions. In Fig. 4 we show the gas column density in the 2563 and 5123 simulations averaged over the y-direction, as a function of time. Large-scale pressure bumps are clearly visible, with spatial correlation times of approximately 10–20 orbits. The pressure bump amplitude is around 1%, independent of both resolution and strength of the external field. Larger boxes have been shown to result in higher-amplitude and longer-lived pressure bumps (Johansen et al. 2009a). We limit ourselves in this paper to a relatively small box, where we can achieve high resolution of the gravitational collapse, but plan to model planetesimal formation in larger boxes in the future. The particle column density averaged over the azimuthal direction, as a function of radial coordinate x and time t in orbits. 
The starting time was chosen to be slightly prior to the emergence of a pressure bump (compare with the left-most and right-most plots of Fig. 4). The particles concentrate slightly downstream of the gas pressure bump, with a maximum column density between three and four times the mean particle column density. The particles are between 40 and 80 cm in radius (i.e. boulders) for our adopted disc model.

We release the particles at a time when the turbulence has saturated, but choose a time when there is no significant large-scale pressure bump present. Thus we choose t = 20 Torb for the 256³ simulation and t = 32 Torb for the 512³ simulation (see left-most and right-most plot of Fig. 4). In the particle simulations we always use a mean vertical field B0 = 0.003 at 256³ to get a turbulent viscosity more similar to 512³. The four friction time bins (Ωτf = 0.25, 0.50, 0.75, 1.00) correspond to particle sizes between 40 and 80 cm. The particles immediately fall towards the mid-plane of the disc, before finding a balance between sedimentation and turbulent stirring. Figure 5 shows how the presence of gas pressure bumps has a dramatic influence on particle dynamics. The particles display column density concentrations of up to 4 times the average density just downstream of the pressure bumps. At this point the gas moves close to Keplerian, because the (positive) pressure gradient of the bump balances the (negative) radial pressure gradient there. The column density concentration is relatively independent of the resolution, as expected since the pressure bump amplitude is almost the same.

7. Self-gravity – moderate resolution

In the moderate-resolution simulation (256³) we release particles and start self-gravity simultaneously at t = 20 Torb. This is different from the approach taken in J07, where self-gravity was turned on after the particles had concentrated in a pressure bump. Thus we address concerns that the continuous self-gravity interaction of the particles would stir up the particle component and prevent the gravitational collapse. After releasing particles we continue the simulation for another thirteen orbits. Some representative particle column density snapshots are shown in Fig. 6. As time progresses the particle column density increases in high-pressure structures with power on length scales ranging from a few percent of a scale height to the full radial domain size. Self-gravity becomes important in these overdense regions, so some local regions begin to contract radially under their own weight, eventually reaching the Roche density and commencing a full 2-D collapse into discrete clumps.

Fig. 6. The particle column density as a function of time after self-gravity is turned on at t = 20.0 Torb, for run M2 (256³ grid cells with 8 × 10⁶ particles). Each gravitationally bound clump is labelled by its Hill mass in units of Ceres masses. The insert shows an enlargement of the region around the most massive bound clump.

The most massive clump at the end of the simulation contains a total particle mass of 34.9 Ceres masses, partially as the result of a collision between a 13.0 and a 14.6 Ceres mass clump occurring at a time around t = 31.6 Torb.

Fig. 7. Temporal zoom-in on the collision between three clumps (planetesimals) in the moderate-resolution run M2. Two clumps with a radial separation of approximately 0.05H shear past each other, bringing their Hill spheres in contact (first two panels). The clumps first pass through each other (panels three and four), but eventually merge (fifth panel).
Finally a much lighter clump collides with the newly formed merger product (sixth panel).

Fig. 8. The particle column density as a function of time after self-gravity is turned on at t = 35.0 Torb in the high-resolution simulation (run H with 512³ grid cells and 64 × 10⁶ particles). Two clumps condense out within each other's Hill spheres and quickly merge. At the end of the simulation bound clumps have masses between 0.5 and 7.5 MCeres.

The Hill sphere of each bound clump is indicated in Fig. 6, together with the mass of particles encompassed inside the Hill radius (in units of the mass of the dwarf planet Ceres). We calculate the Hill radius of clumps at a given time by searching for the point of highest particle column density in the x-y plane. We first consider a "circle" of radius one grid point and calculate the two terms relevant for the determination of the Hill radius – the tidal term 3Ω²R and the gravitational acceleration GMp/R² of a test particle at the edge of the Hill sphere due to the combined gravity of particles inside the Hill sphere. The mass Mp contained in a cylinder of radius R must fulfil

GMp/R² > 3Ω²R .    (19)

The natural constant G is set by the non-dimensional form of the Poisson equation (Eq. (20)). Using natural units for the simulation, with cs = Ω = H = ρ0 = 1, together with our choice of the dimensionless self-gravity parameter (Eq. (21)), we obtain an expression for the gravitational constant G. We then check the validity of the expression in Eq. (22), which compares the tidal term 3Ω²R with the gravitational pull of the mass inside the circle, computed from the vertically averaged mass density at each grid point (i,j) and the volume δV of a grid cell. It is convenient to work with the vertically averaged density since this quantity has been output regularly by the code during the simulation. The sum in Eq. (22) is taken over all grid points lying within the circle of radius R centred on the densest point. We continue to increase R in units of δx until the boundness criterion is no longer fulfilled. This defines the Hill radius R. Strictly speaking our method for finding the Hill radius is only valid if the particles are distributed in a spherically symmetric way. In reality particles are spread across the mid-plane with a scale height of approximately 0.04H. We nevertheless find by inspection that the calculated Hill radii correspond well to the regions where the particle flow appears dominated by the self-gravity rather than by the Keplerian shear of the main flow, and that the mass within the Hill sphere does not fluctuate because of the inclusion of non-bound particles.

The particle-mesh Poisson solver based on FFTs cannot consider the gravitational potential due to structure within a grid cell. From the perspective of the gravity solver the smallest radius of a bound object is thus the grid cell length δx. This puts a lower limit on the mass of bound structures, since the Hill radius cannot be smaller than a grid cell,

[GMp/(3Ω²)]^(1/3) ≥ δx .    (23)

This gives a minimum mass of

Mmin = 3Ω²(δx)³/G .    (24)

Using M⋆ = 2.0 × 10³³ g, H/r = 0.05 and δx = 0.0052H (256³) or δx = 0.0026H (512³), we get a minimum mass of Mmin ≈ 0.11 MCeres at 256³ and Mmin ≈ 0.014 MCeres at 512³. Less massive objects will inevitably be sheared out due to the gravity softening. Figure 6 shows that a number of discrete particle clumps condense out of the turbulent flow during the 13 orbits that are run with self-gravity. The most massive clump has a mass of approximately 35 Ceres masses at the end of the simulation, while four smaller clumps have masses between 2.4 and 4.6 Ceres masses. The smallest clumps are more than ten times more massive than the minimum resolvable mass.
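The minimum resolvable clump masses quoted above follow directly from Eq. (24); the short check below reproduces them for the stated parameters (M⋆ = 2.0 × 10³³ g, H/r = 0.05), taking the mass of Ceres to be 9.4 × 10²³ g (an assumed standard value, not given in the text). Note that the combination is independent of the orbital radius and of the value of G.

```python
M_star   = 2.0e33         # g, stellar mass quoted in the text
H_over_r = 0.05
M_Ceres  = 9.4e23         # g, assumed value for the dwarf planet Ceres

# Eq. (24) with Omega^2 = G*M_star/r^3 and dx expressed in units of H = (H/r)*r:
# M_min = 3*Omega^2*dx^3/G = 3*M_star*((dx/H)*(H/r))^3
for dx_over_H in (0.0052, 0.0026):            # 256^3 and 512^3 grid spacing
    M_min = 3.0 * M_star * (dx_over_H * H_over_r)**3
    print(dx_over_H, M_min / M_Ceres)         # ~0.11 and ~0.014 Ceres masses
```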
7.1. Planetesimal collision

The 35 Ceres-mass particle clump visible in the last panel of Fig. 6 is partially the result of a collision between a 13.0 and a 14.6 Ceres mass clump at a time around t = 31.6 Torb. The collision is shown in greater detail in Fig. 7. The merging starts when two clumps with a radial separation of approximately 0.05H shear past each other, bringing their Hill spheres in contact. The less massive clump first passes directly through the massive clump, appearing distorted on the other side, before merging completely. A third clump collides with the collision product shortly afterwards, adding another 3.5 Ceres masses to the clump. The particle-mesh self-gravity solver does not resolve particle self-gravity on scales smaller than a grid cell. The bound particle clumps therefore stop their contraction when the size of a grid cell is reached. This exaggerated size artificially increases the collisional cross section of the planetesimals. The clumps behave aerodynamically like a collection of dm-sized particles, while a single dwarf-planet-sized body would have a much longer friction time. Therefore the planetesimal collisions that we observe are not conclusive evidence of a collisionally evolving size distribution. Future convergence tests at extremely high resolution (1024³ or higher), or mesh refinement around the clumps, will be needed to test the validity of the planetesimal mergers. The system is however not completely dominated by the discrete gravity sources. A significant population of free particles is present even after several bound clumps have formed. Those free particles can act as a source of dynamical friction and thus allow close encounters to lead to merging or binary formation (Goldreich et al. 2002). In the high-resolution simulation presented in the next section we find clear evidence of a trailing spiral density structure that is involved in the collision between two planetesimals.

8. Self-gravity – high resolution

Fig. 9. The total bound mass and the mass in the most massive gravitationally bound clump as a function of time. The left panel shows the result of the moderate-resolution simulation (M2). Around a time of 30 Torb there is a condensation event that transfers many particles to bound clumps. Two orbits later, at 32 Torb, the two most massive clumps collide and merge. The right panel shows the high-resolution simulation (H). The total amount of condensed material is comparable to M2, but the mass of the most massive clump is smaller. This may be a result either of increased resolution in the self-gravity solver or of the limited time span of the high-resolution simulation. The total particle mass for both resolutions is Mtot ≈ 460 MCeres. Only around 10% of the mass is converted into planetesimals during the time-span of the simulations.

In Sect. 6 we showed that particle concentration is maintained when going from 256³ to 512³ grid cells. In this section we show that the inclusion of self-gravity at high resolution leads to rapid formation of bound clumps similar to what we observe at moderate resolution. Given the relatively high computational cost of self-gravity simulations we start the self-gravity at t = 35 Torb in the 512³ simulation, three orbits after inserting the particles. The evolution of particle column density is shown in Fig. 8. Due to the smaller grid size bound particle clumps appear visually smaller than in the 256³ simulation. The increased compactness of the planetesimals can potentially decrease the probability for planetesimal collisions (Sect.
7.1), which makes it important to do convergence tests. The high-resolution simulation proceeds much as the moderate-resolution simulation. Panels 1 and 2 of Fig. 8 show how overdense bands of particles contract radially under their own gravity. The increased resolution of the self-gravity solver allows for a number of smaller planetesimals to condense out as the bands reach the local Roche density at smaller radial scales (panel 3). Two of the clumps are born within each other's Hill spheres. They merge shortly after into a single clump (panel 5). This clump has grown to 7.5 MCeres at the end of the simulation, which is the most massive clump in a population of clumps with masses between 0.5 and 7.5 MCeres. Although we do not reach the same time span as in the low resolution simulation, we do observe two bound clumps colliding. However, the situation is different, since the colliding clumps form very close to each other and merge almost immediately. An interesting aspect is the presence of a particle density structure trailing the less massive clump. The gravitational torque from this structure can play an important role in the collision between the two clumps, since the clumps initially appear to orbit each other progradely. This confirms the trend observed in Johansen & Lacerda (2010) for particles to be accreted with prograde angular momentum in the presence of drag forces, which can explain why the largest asteroids tend to spin in the prograde direction2. The gravitational torque from the trailing density structure would be negative in that case and cause the relative planetesimal orbit to shrink. 8.1. Clump masses In Fig. 9 we show the total mass in bound clumps as a function of time. Finding the physical mass of a clump requires knowledge of the scale height H of the gas, as that sets the physical unit of length. The self-gravity solver in itself only needs to know , which is effectively a combination of density and length scale. When quoting the physical mass we assume orbital distance r = 5 AU and aspect ratio H/r = 0.05. The total mass in particles in the box is Mtot = 0.01ΣgasLxLy ≈ 460 MCeres, with and Σgas = 1800 g cm-2 from Eq. (11). In both simulations approximately 50 MCeres of particles are present in bound clumps at the end of the simulation. However, self-gravity was turned on and sustained for very different times in the two different simulations. In the moderate-resolution simulation most mass is bound in a single clump at the end of the simulation. The merger event discussed in Section 7.1 is clearly visible around t = 32 Torb. Figure 10 shows histograms of the clump masses for moderate resolution (left panel) and high resolution (right panel). At moderate resolution initially only a single clump forms, but seven orbits later there are 5 bound clumps, all of several Ceres masses. The high-resolution run produces many more small clumps in the initial planetesimal formation burst. This is likely an effect of the "hot start" initial condition where we turn on gravity during a concentration event as the high particle density allows smaller regions to undergo collapse. Histogram of clump masses after first production of bound clumps and at the end of the simulation. At moderate resolution (left panel) only a single clump condenses out initially, but seven orbits later there are five clumps, including the 30+ MCeres object formed by merging. At high resolution (right panel) the initial planetesimal burst leads to the formation of many sub-Ceres-mass clumps. 
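Two of the disc-model numbers quoted in the text, Q ≈ 3.2 and Mtot ≈ 460 MCeres, follow directly from the stated parameters (Σgas ≈ 1800 g cm⁻², r = 5 AU, H/r = 0.05, M⋆ = 2.0 × 10³³ g, box size 1.32H, and a solids-to-gas ratio of 0.01); the short check below reproduces them, again taking MCeres = 9.4 × 10²³ g as an assumed value.

```python
import numpy as np

G, M_star, M_Ceres = 6.674e-8, 2.0e33, 9.4e23        # cgs units; Ceres mass assumed
r = 5.0 * 1.496e13                                   # 5 AU in cm
H_over_r, Sigma_gas, L_over_H, Z = 0.05, 1800.0, 1.32, 0.01

H = H_over_r * r
Omega = np.sqrt(G * M_star / r**3)
c_s = H * Omega                                      # c_s = (H/r) v_K

Q = c_s * Omega / (np.pi * G * Sigma_gas)            # Toomre parameter of the gas
M_tot = Z * Sigma_gas * (L_over_H * H)**2            # total particle mass in the box
print(f"Q = {Q:.2f}, M_tot = {M_tot / M_Ceres:.0f} Ceres masses")   # ~3.2 and ~460
```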
The most massive clump is similar to what forms initially in the moderate-resolution run. In this paper we present (a) the first 5123 grid cell simulation of dust dynamics in turbulence driven by the magnetorotational instability and (b) a long time integration of the system performed at 2563 grid cells. Perhaps the most important finding is that large-scale pressure bumps and zonal flows in the gas appear relatively independent of the resolution. The same is true for particle concentration in these pressure bumps. While saturation properties of MRI turbulence depend on the implicit or explicit viscosity and resistivity (Lesur & Longaretti 2007; Fromang & Papaloizou 2007; Davis et al. 2010), the emergence of large-scale zonal flows appears relatively independent of resolution (this work, Johansen et al. 2009a) and numerical scheme (Fromang & Stone 2009; Stone & Gardiner 2010). Particle concentration in pressure bumps can have profound influence on particle coagulation by supplying an environment with high densities and low radial drift speeds (Brauer et al. 2008b), and on formation of planetesimals by gravitational contraction and collapse of overdense regions (this work; J07). A direct comparison between the moderate-resolution and the high-resolution simulation is more difficult after self-gravity is turned on. The appearance of gravitationally bound clumps is inherently stochastic, as is the amplitude and phase of the first pressure bump to appear. The comparison is furthermore complicated by the extreme computational cost of the high-resolution simulation, which allowed us to evolve the system only for a few orbits after self-gravity is turned on. A significant improvement over J07 is that the moderate-resolution simulation could be run for much longer time. Therefore we were able to start self-gravity shortly after MRI turbulence had saturated, and to follow the system for more than ten orbits. In J07 self-gravity was turned on during a concentration event, precluding the concurrent evolution of self-gravity and concentration. This "hot start" may artificially increase the number of the forming planetesimals. The "cold start" initial conditions presented here lead to a more gradual formation of planetesimals over more than ten orbits. Still the most massive bound clump had grown to approximately 35 MCeres at the end of the simulation and was still gradually growing. The high-resolution simulation was given a "hot start", to focus computational resources on the most interesting part of the evolution. As expected these initial conditions allow a much higher number of smaller planetesimals to condense out of the flow. The most massive planetesimal at the end of the high-resolution simulation contained 7.5 MCeres of particles, but this "planetesimal" is accompanied by a number of bound clumps with masses from 0.5 to 4.5 MCeres. The first clumps to condense out is on the order of a few Ceres masses in both the moderate-resolution simulation and the high-resolution simulation. The high-resolution simulation produced additionally a number of sub-Ceres-mass clumps. It thus appears that the higher-resolution simulation samples the initial clump function down to smaller masses. This observation strongly supports the use of simulations of even higher resolution in order to study the broader spectrum of clump masses. Higher resolution will also allow studying simultaneously the concentration of smaller particles in smaller eddies and their role in the planetesimal formation process (Cuzzi et al. 
2010). We emphasize here the difference between the initial mass function of gravitationally bound clumps and of planetesimals. The first can be studied in the simulations that we present in this paper, while the actual masses and radii of planetesimal that form in the bound clumps will require the inclusion of particle shattering and particle sticking. Zsom & Dullemond (2008) and Zsom et al. (2010) used a representative particle approach to model interaction between superparticles in a 0-D particle-in-box approach, based on a compilation of laboratory results for the outcome of collisions depending on particle sizes, composition and relative speed (Güttler et al. 2010). We plan to implement this particle interaction scheme in the Pencil Code and perform simulations that include the concurrent action of hydrodynamical clumping and particle interaction. This approach will ultimately be needed to determine whether each clump forms a single massive planetesimal or several planetesimals of lower mass. At both moderate and high resolution we observe the close approach and merging of gravitationally bound clumps. Concerns remain about whether these collisions are real, since our particle-mesh self-gravity algorithm prevents bound clumps from being smaller than a grid cell. Thus the collisional cross section is artificially large. Two observations nevertheless indicate that the collisions can be real: we observe planetesimal mergers at both moderate and high resolution and we see that the environment in which planetesimals merge is rich in unbound particles. Dynamical friction may thus play an important dissipative role in the dynamics and the merging. At high resolution we clearly see a trailing spiral arm exerting a negative torque on a planetesimal born in the vicinity of another planetesimal. If the transport of newly born planetesimals into each other's Hill spheres is physical (i.e. moderated by dynamical friction rather than artificial enlargement of planetesimals and numerical viscosity), then that can lead to both mergers and production of binaries. Nesvorný et al. (2010) recently showed that gravitationally contracting clumps of particles can form wide separation binaries for a range of initial masses and clump rotations and that the properties of the binary orbits are in good agreement with observed Kuiper belt object binaries. In future simulations strongly bound clusters of particles should be turned into single gravitating sink particles, in order to prevent planetesimals from having artificially large sizes. In the present paper we decided to avoid using sink particles because we wanted to evolve the system in its purest state with as few assumptions as possible. The disadvantage is that the particle clumps become difficult to evolve numerically and hard to load balance. Using sink particles will thus also allow a longer time evolution of the system and use of proper friction times of large bodies. The measured α-value of MRI turbulence at 5123 is α ≈ 0.003. At a sound speed of cs = 500 m/s, the expected collision speed of marginally coupled m-sized boulders, based empirically on the measurements of J07, is 25–30 m/s. J07 showed that the actual collision speeds can be a factor of a few lower, because the particle layer damps MRI turbulence locally. In general boulders are expected to shatter when they collide at 10 m/s or higher (Benz 2000). 
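For orientation, the 25–30 m/s figure quoted above is roughly what one obtains from the common order-of-magnitude scaling δv ~ √α cs for turbulence-driven collision speeds; this scaling is an assumption of mine for illustration and is not the empirical calibration used in the text.

```python
import numpy as np

alpha, c_s = 0.003, 500.0        # alpha measured at 512^3 and the sound speed quoted above (m/s)
print(np.sqrt(alpha) * c_s)      # ~27 m/s, within the quoted 25-30 m/s range
```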
Much larger km-sized bodies are equally prone to fragmentation as random gravitational torques exerted by the turbulent gas excite relative speeds higher than the gravitational escape speed (Ida et al. 2008; Leinhardt & Stewart 2009). A good environment for building planetesimals from boulders may require α ≲ 0.001, as in J07. Johansen et al. (2009b) presented simulations with no MRI turbulence where turbulence and particle clumping is driven by the streaming instability (Youdin & Goodman 2005). They found typical collision speeds as low as a few meters per second. A second reason to prefer weak turbulence is the amount of mass available in the disc. If we apply our results to r = 5 AU, then our dimensionless gravity parameter corresponds to a gas column density of Σgas ≈ 1800 g cm-2, more than ten times higher than the Minimum Mass Solar Nebula (Weidenschilling 1977b; Hayashi 1981). Turbulence driven by streaming and Kelvin-Helmholtz instabilities can form planetesimals for column densities comparable to the Minimum Mass Solar Nebula (Johansen et al. 2009b). The saturation of the magnetorotational instability is influenced by both the mean magnetic field and small scale dissipation parameters, and the actual saturation level in protoplanetary discs is still unknown. Our results show that planetesimal formation by clumping and self-gravity benefits overall from weaker MRI turbulence with α ≲ 0.001. Future improvements in our understanding of protoplanetary disc turbulence will be needed to explore whether such a relatively low level of MRI turbulence is the case in the entire disc or only in smaller regions where the resistivity is high or the mean magnetic field is weak. See http://code.google.com/p/pencil-code/. Prograde rotation is not readily acquired in standard numerical models where planetesimals accumulate in a gas-free environment, although planetary birth in rotating self-gravitating gas blobs was recently been put forward to explain the prograde rotation of planets (Nayakshin 2011). This project was made possible through a computing grant for five rack months at the Jugene BlueGene/P supercomputer at Jülich Supercomputing Centre. Each rack contains 4096 cores, giving a total computing grant of approximately 15 million core hours. A.J. was supported by the Netherlands Organization for Scientific Research (NWO) through Veni grant 639.041.922 and Vidi grant 639.042.607 during part of the project. We are grateful to Tristen Hayfield for discussions on particle load balancing and to Xuening Bai and Andrew Youdin for helpful comments. The anonymous referee is thanked for an insightful referee report. H.K. has been supported in part by the Deutsche Forschungsgemeinschaft DFG through grant DFG Forschergruppe 759 "The Formation of Planets: The Critical First Growth Phase". Bai, X.-N., & Stone, J. M. 2010a, ApJS, 190, 297 [NASA ADS] [CrossRef] [Google Scholar] Bai, X.-N., & Stone, J. M. 2010b, ApJ, 722, 1437 [NASA ADS] [CrossRef] [Google Scholar] Bai, X.-N., & Stone, J. M. 2010c, ApJ, 722, L220 [NASA ADS] [CrossRef] [Google Scholar] Balbus, S. A., & Hawley, J. F. 1991, ApJ, 376, 21 [NASA ADS] [CrossRef] [Google Scholar] Balsara, D. S., Tilley, D. A., Rettig, T., & Brittain, S. D. 2009, MNRAS, 397, 24 [NASA ADS] [CrossRef] [Google Scholar] Benz, W. 2000, Space Sci. Rev., 92, 279 [NASA ADS] [CrossRef] [Google Scholar] Birnstiel, T., Dullemond, C. P., & Brauer, F. 2009, A&A, 503, L5 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] Blum, J., & Muench, M. 
1993, Icarus, 106, 151 [NASA ADS] [CrossRef] [Google Scholar] Blum, J., & Wurm, G. 2008, ARA&A, 46, 21 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] Brandenburg, A. 2003, in Advances in nonlinear dynamos, The Fluid Mechanics of Astrophysics and Geophysics, ed. A. Ferriz-Mas, & M. Núñez (London and New York: Taylor & Francis), 9, 269 [arXiv:astro-ph/0109497] [Google Scholar] Brandenburg, A., Nordlund, Å., Stein, R.F., & Torkelsson, U. 1995, ApJ, 446, 741 [NASA ADS] [CrossRef] [Google Scholar] Brauer, F., Dullemond, C. P., & Henning, Th. 2008a, A&A, 480, 859 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] Brauer, F., Henning, Th., & Dullemond, C. P. 2008b, A&A, 487, L1 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] Cuzzi, J. N., Hogan, R. C., & Shariff, K. 2008, ApJ, 687, 1432 [NASA ADS] [CrossRef] [Google Scholar] Cuzzi, J. N., Hogan, R. C., & Bottke, W. F. 2010, Icarus, 208, 518 [NASA ADS] [CrossRef] [Google Scholar] Davis, S. W., Stone, J. M., & Pessah, M. E. 2010, ApJ, 713, 52 [NASA ADS] [CrossRef] [Google Scholar] Dullemond, C. P., & Dominik, C. 2005, A&A, 434, 971 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] Dzyurkevich, N., Flock, M., Turner, N. J., Klahr, H., & Henning, Th. 2010, A&A, 515, A70 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] Fromang, S., & Nelson, R. P. 2005, MNRAS, 364, L81 [NASA ADS] [CrossRef] [Google Scholar] Fromang, S., & Papaloizou, J. 2007, A&A, 476, 1113 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] Fromang, S., & Stone, J. M. 2009, A&A, 507, 19 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] Fromang, S., Terquem, C., & Balbus, S. A. 2002, MNRAS, 329, 18 [NASA ADS] [CrossRef] [Google Scholar] Goldreich, P., & Ward, W. R. 1972, ApJ, 183, 1051 [NASA ADS] [CrossRef] [Google Scholar] Goldreich, P., Lithwick, Y., & Sari, R. 2002, Nature, 420, 643 [NASA ADS] [CrossRef] [PubMed] [Google Scholar] Güttler, C., Blum, J., Zsom, A., Ormel, C. W., & Dullemond, C. P. 2010, A&A, 513, A56 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] Hawley, J. F., Gammie, C. F., & Balbus, S. A. 1995, ApJ, 440, 742 [NASA ADS] [CrossRef] [Google Scholar] Hayashi, C. 1981, Progress of Theoretical Physics Supplement, 70, 35 [NASA ADS] [CrossRef] [MathSciNet] [Google Scholar] Hockney, R. W., & Eastwood, J. W. 1981, Computer Simulation Using Particles (New York: McGraw-Hill) [Google Scholar] Ida, S., Guillot, T., & Morbidelli, A. 2008, ApJ, 686, 1292 [NASA ADS] [CrossRef] [Google Scholar] Johansen, A., & Lacerda, P. 2010, MNRAS, 404, 475 [NASA ADS] [Google Scholar] Johansen, A., & Youdin, A. 2007, ApJ, 662, 627 [NASA ADS] [CrossRef] [Google Scholar] Johansen, A., Klahr, H., & Henning, Th. 2006, ApJ, 636, 1121 [NASA ADS] [CrossRef] [MathSciNet] [Google Scholar] Johansen, A., Oishi, J. S., Low, M., et al. 2007, Nature, 448, 1022 [NASA ADS] [CrossRef] [PubMed] [Google Scholar] Johansen, A., Brauer, F., Dullemond, C., Klahr, H., & Henning, Th. 2008, A&A, 486, 597 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] Johansen, A., Youdin, A., & Klahr, H. 2009a, ApJ, 697, 1269 [NASA ADS] [CrossRef] [Google Scholar] Johansen, A., Youdin, A., & Mac Low, M.-M. 2009b, ApJ, 704, L75 [NASA ADS] [CrossRef] [Google Scholar] Kato, M. T., Nakamura, K., Tandokoro, R., Fujimoto, M., & Ida, S. 2009, ApJ, 691, 1697 [NASA ADS] [CrossRef] [Google Scholar] Kato, M. T., Fujimoto, M., & Ida, S. 2010, ApJ, 714, 1155 [NASA ADS] [CrossRef] [Google Scholar] Klahr, H. H., & Henning, Th. 1997, Icarus, 128, 213 [NASA ADS] [CrossRef] [Google Scholar] Leinhardt, Z. 
M., & Stewart, S. T. 2009, Icarus, 199, 542 [NASA ADS] [CrossRef] [Google Scholar] Lesur, G., & Longaretti, P.-Y. 2007, MNRAS, 378, 1471 [NASA ADS] [CrossRef] [Google Scholar] Lesur, G., & Longaretti, P.-Y. 2011, A&A, 528, A17 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] Lithwick, Y., & Chiang, E. 2007, ApJ, 656, 524 [NASA ADS] [CrossRef] [Google Scholar] Lyra, W., Johansen, A., Klahr, H., & Piskunov, N. 2008a, A&A, 479, 883 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] Lyra, W., Johansen, A., Klahr, H., & Piskunov, N. 2008b, A&A, 491, L41 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] Miniati, F. 2010, J. Comput. Phys., 229, 3916 [NASA ADS] [CrossRef] [Google Scholar] Morbidelli, A., Bottke, W. F., Nesvorný, D., & Levison, H. F. 2009, Icarus, 204, 558 [NASA ADS] [CrossRef] [Google Scholar] Nayakshin, S. 2011, MNRAS, 410, L1 [NASA ADS] [CrossRef] [Google Scholar] Nesvorný, D., Youdin, A. N., & Richardson, D. C. 2010, AJ, 140, 785 [NASA ADS] [CrossRef] [Google Scholar] Poppe, T., Blum, J., & Henning, Th. 2000, ApJ, 533, 454 [NASA ADS] [CrossRef] [Google Scholar] Rein, H., Lesur, G., & Leinhardt, Z. M. 2010, A&A, 511, A69 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] Safronov, V. S. 1960, Annales d'Astrophysique, 23, 979 [NASA ADS] [Google Scholar] Safronov, V. S. 1969, Evoliutsiia doplanetnogo oblaka (English transl.: Evolution of the Protoplanetary Cloud and Formation of Earth and the Planets, NASA Tech. Transl. F-677, Jerusalem: Israel Sci. Transl. 1972) [Google Scholar] Sano, T., Miyama, S. M., Umebayashi, T., & Nakano, T. 2000, ApJ, 543, 486 [NASA ADS] [CrossRef] [Google Scholar] Schräpler, R., & Henning, Th. 2004, ApJ, 614, 960 [NASA ADS] [CrossRef] [Google Scholar] Sekiya, M. 1998, Icarus, 133, 298 [CrossRef] [Google Scholar] Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337 [NASA ADS] [Google Scholar] Sheppard, S. S., & Trujillo, C. A. 2010, ApJ, 723, L233 [NASA ADS] [CrossRef] [Google Scholar] Simon, J. B., Hawley, J. F., & Beckwith, K. 2009, ApJ, 690, 974 [NASA ADS] [CrossRef] [Google Scholar] Stone, J. M., & Gardiner, T. A. 2010, ApJS, 189, 142 [NASA ADS] [CrossRef] [Google Scholar] Toomre, A. 1964, ApJ, 139, 1217 [NASA ADS] [CrossRef] [Google Scholar] Wada, K., Tanaka, H., Suyama, T., Kimura, H., & Yamamoto, T. 2009, ApJ, 702, 1490 [NASA ADS] [CrossRef] [Google Scholar] Tilley, D. A., Balsara, D. S., Brittain, S. D., & Rettig, T. 2010, MNRAS, 403, 211 [NASA ADS] [CrossRef] [Google Scholar] Yang, C.-C., Mac Low, M.-M., & Menou, K. 2009, ApJ, 707, 1233 [NASA ADS] [CrossRef] [Google Scholar] Youdin, A. 2010, in Proc. Les Houches Winter School: Physics and Astrophysics of Planetary Systems, Chamonix, France, 2008 (EDP Sciences), EAS Publ. Ser., 41, 187 [CrossRef] [EDP Sciences] [Google Scholar] Youdin, A., & Goodman, J. 2005, ApJ, 620, 459 [NASA ADS] [CrossRef] [Google Scholar] Youdin, A., & Johansen, A. 2007, ApJ, 662, 613 [NASA ADS] [CrossRef] [Google Scholar] Youdin, A., & Shu, F. H. 2002, ApJ, 580, 494 [NASA ADS] [CrossRef] [Google Scholar] Weidenschilling, S. J. 1977a, MNRAS, 180, 57 [NASA ADS] [CrossRef] [Google Scholar] Weidenschilling, S. J. 1977b, Ap&SS, 51, 153 [NASA ADS] [CrossRef] [Google Scholar] Weidenschilling, S. J. 1984, Icarus, 60, 553 [NASA ADS] [CrossRef] [Google Scholar] Weidenschilling, S. J. 1995, Icarus, 116, 433 [NASA ADS] [CrossRef] [Google Scholar] Weidenschilling, S. J., & Cuzzi, J. N. 1993, in Protostars and Planets III, 1031 [Google Scholar] Wurm, G., Paraskov, G., & Krauss, O. 
2005, Icarus, 178, 253 [NASA ADS] [CrossRef] [MathSciNet] [Google Scholar] Zsom, A., & Dullemond, C. P. 2008, A&A, 489, 931 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] Zsom, A., Ormel, C. W., Güttler, C., Blum, J., & Dullemond, C. P. 2010, A&A, 513, A57 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
Zh. Vychisl. Mat. Mat. Fiz., 2017, Volume 57, Number 5, Pages 814–831 (Mi zvmmf10572)

Computer difference scheme for a singularly perturbed elliptic convection-diffusion equation in the presence of perturbations

G. I. Shishkin
Institute of Mathematics and Mechanics, Ural Branch of the Russian Academy of Sciences, Ekaterinburg

Abstract: A grid approximation of a boundary value problem for a singularly perturbed elliptic convection-diffusion equation with a perturbation parameter $\varepsilon$, $\varepsilon\in(0, 1]$, multiplying the highest order derivatives is considered on a rectangle. The stability of a standard difference scheme based on monotone approximations of the problem on a uniform grid is analyzed, and the behavior of discrete solutions in the presence of perturbations is examined. With an increase in the number of grid nodes, this scheme does not converge $\varepsilon$-uniformly in the maximum norm; only conditional convergence takes place. When the solution of the difference scheme converges, which occurs if $N_1^{-1}N_2^{-1}\ll\varepsilon$, where $N_1$ and $N_2$ are the numbers of grid intervals in $x$ and $y$, respectively, the scheme is neither $\varepsilon$-uniformly well-conditioned nor $\varepsilon$-uniformly stable to data perturbations in the grid problem and to computer perturbations. For the standard difference scheme in the presence of data perturbations in the grid problem and/or computer perturbations, conditions imposed on the "parameters" of the difference scheme and of the computer (namely, on $\varepsilon$, $N_1$, $N_2$, admissible data perturbations in the grid problem, and admissible computer perturbations) are obtained that ensure the convergence of the perturbed solutions as $N_1$, $N_2\to\infty$, $\varepsilon\in(0, 1]$. The difference scheme constructed in the presence of the indicated perturbations that converges as $N_1$, $N_2\to\infty$ for fixed $\varepsilon$, $\varepsilon\in(0, 1]$, is called a computer difference scheme. Schemes converging $\varepsilon$-uniformly and conditionally converging computer schemes are referred to as reliable schemes. Conditions on the data perturbations in the standard difference scheme and on computer perturbations are also obtained under which the convergence rate of the solution to the computer difference scheme has the same order as the solution of the standard difference scheme in the absence of perturbations. Due to this property of its solutions, the computer difference scheme can be effectively used in practical computations.

Key words: singularly perturbed boundary value problem, elliptic convection-diffusion equation, boundary layer, standard difference scheme on uniform meshes, perturbations of data of the grid problem, computer perturbations, maximum norm, stability of schemes to perturbations, conditioning of schemes, computer scheme, reliable difference scheme.

Funding: Russian Foundation for Basic Research, grant 16-01-00727_а.

DOI: https://doi.org/10.7868/S004446691705012X

Computational Mathematics and Mathematical Physics, 2017, 57:5, 815–832

Citation: G. I. Shishkin, "Computer difference scheme for a singularly perturbed elliptic convection-diffusion equation in the presence of perturbations", Zh. Vychisl. Mat. Mat. Fiz., 57:5 (2017), 814–831; Comput. Math. Math.
Phys., 57:5 (2017), 815–832
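The abstract above analyses a standard monotone difference scheme on uniform grids for a singularly perturbed convection-diffusion problem. As a rough, generic illustration of such a scheme (not the author's construction; the function, its parameters and defaults below are mine), the sketch assembles a first-order upwind discretisation of −ε Δu + b·∇u = f on the unit square with homogeneous Dirichlet boundary conditions and solves the resulting sparse linear system. Refining the mesh for a fixed small ε illustrates the conditional convergence discussed above: accuracy improves only once the mesh starts to resolve the ε-dependent boundary layers.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_upwind(eps, b=(1.0, 1.0), f=lambda x, y: 1.0, N1=64, N2=64):
    """First-order upwind (monotone) scheme on a uniform grid for
    -eps*(u_xx + u_yy) + b1*u_x + b2*u_y = f on (0,1)^2 with u = 0 on the boundary.
    Assumes b1, b2 > 0, so backward differences are the upwind choice."""
    b1, b2 = b
    h1, h2 = 1.0 / N1, 1.0 / N2
    x, y = np.linspace(0, 1, N1 + 1), np.linspace(0, 1, N2 + 1)
    n = (N1 - 1) * (N2 - 1)
    idx = lambda i, j: (j - 1) * (N1 - 1) + (i - 1)   # interior nodes only
    A, rhs = sp.lil_matrix((n, n)), np.zeros(n)
    for j in range(1, N2):
        for i in range(1, N1):
            k = idx(i, j)
            # central differences for the diffusion term
            A[k, k] += 2 * eps / h1**2 + 2 * eps / h2**2
            for di, dj, h in ((1, 0, h1), (-1, 0, h1), (0, 1, h2), (0, -1, h2)):
                ii, jj = i + di, j + dj
                if 1 <= ii <= N1 - 1 and 1 <= jj <= N2 - 1:
                    A[k, idx(ii, jj)] -= eps / h**2
                # boundary neighbours carry u = 0, so nothing is added to rhs
            # backward (upwind) differences for the convection term
            A[k, k] += b1 / h1 + b2 / h2
            if i > 1:
                A[k, idx(i - 1, j)] -= b1 / h1
            if j > 1:
                A[k, idx(i, j - 1)] -= b2 / h2
            rhs[k] = f(x[i], y[j])
    # Off-diagonal entries are <= 0 and diagonals are positive, so the matrix is an
    # M-matrix and the scheme is monotone; convergence is nevertheless only
    # conditional with respect to eps on uniform meshes.
    u = spla.spsolve(A.tocsr(), rhs)
    return x, y, u.reshape(N2 - 1, N1 - 1)
```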
Unveiling the key factor for the phase reconstruction and exsolved metallic particle distribution in perovskites Hyunmin Kim1 na1, Chaesung Lim2 na1, Ohhun Kwon3 na1, Jinkyung Oh1, Matthew T. Curnan2, Hu Young Jeong ORCID: orcid.org/0000-0002-5550-52984, Sihyuk Choi5, Jeong Woo Han ORCID: orcid.org/0000-0001-5676-58442 & Guntae Kim ORCID: orcid.org/0000-0002-5167-09821 To significantly increase the amount of exsolved particles, the complete phase reconstruction from simple perovskite to Ruddlesden-Popper (R-P) perovskite is greatly desirable. However, a comprehensive understanding of key parameters affecting the phase reconstruction to R-P perovskite is still unexplored. Herein, we propose the Gibbs free energy for oxygen vacancy formation in Pr0.5(Ba/Sr)0.5TO3-δ (T = Mn, Fe, Co, and Ni) as the important factor in determining the type of phase reconstruction. Furthermore, using in-situ temperature & environment-controlled X-ray diffraction measurements, we report the phase diagram and optimum 'x' range required for the complete phase reconstruction to R-P perovskite in Pr0.5Ba0.5-xSrxFeO3-δ system. Among the Pr0.5Ba0.5-xSrxFeO3-δ, (Pr0.5Ba0.2Sr0.3)2FeO4+δ – Fe metal demonstrates the smallest size of exsolved Fe metal particles when the phase reconstruction occurs under reducing condition. The exsolved nano-Fe metal particles exhibit high particle density and are well-distributed on the perovskite surface, showing great catalytic activity in fuel cell and syngas production. Tailoring the functionality of perovskite oxides (ABO3) by decorating the surface with catalytically active particles plays an important role in energy-related applications such as fuel cells, electrolysis cells, metal-air batteries, and supercapacitors1,2,3,4,5,6. The catalyst particles are typically prepared by deposition techniques (e.g. infiltration, chemical vapor deposition, and pulsed laser deposition), in which the catalysts are embedded onto the surface from external precursors7,8,9. However, these techniques require redundant heat-treatments for preparation and the catalyst particles suffer from agglomeration and/or coarsening over time, resulting in performance degradation10,11. In this respect, it is of great importance to develop more robust and time-efficient catalyst preparation method. Exsolution phenomenon on the basis of in-situ growth of metal particles has been suggested as an advanced approach to designing perovskite matrix with electro-catalytically active particles12,13. In this approach, catalytically-active metal elements (e.g. Pd, Ru, Co, Ni, and Fe, etc…) are initially incorporated into the B-site of perovskite oxides, and then exsolved as metallic particles from the perovskite support under reducing atmosphere14,15. As compared with the conventional catalyst preparation methods, the in-situ exsolution process provides benefits of time-efficient catalyst preparation, enhanced catalyst lifetime, and robust thermal stability16,17. Notwithstanding the advantages, two major thresholds hinder the practical application of the exsolution process: (i) restricted diffusion of catalytically active cations to the surface due to preferential segregation within the bulk18, and (ii) structural destruction and/or insulating phase evolution after excessive cation defect formation19. In order to address the challenges of the exsolution phenomenon, A-site deficient perovskites (A/B < 1) has been extensively employed as an attractive methodology2,20,21,22. 
In A-site deficient perovskites, the formation of oxygen vacancies is promoted by phase stabilization from the non-stoichiometric perovskite to the defect-free perovskite under reducing conditions, facilitating the B-site exsolution23,24. Hence, the B-site exsolution level is proportional to the A-site deficiency range ('α' for A1-αBO3−δ). Meanwhile, there is a restriction on the variation of the A-site deficiency range (about 0 < α < 0.2 for A1-αBO3−δ) because excessive A-site deficiency may be accompanied by the formation of undesirable A-site oxide phases19,25. Given these aspects, an alternative method to further trigger the B-site exsolution is using the in-situ phase reconstruction from simple perovskite to Ruddlesden-Popper (R-P) perovskite oxides (An+1BnO3n+1 with n = 1, 2, and 3) via a reduction process26,27. This strategy facilitates abundant formation of oxygen vacancies during the phase reconstruction, breaking the bottleneck of exsolution capability.

$$\mathrm{ABO_3}\ \xrightarrow[\ \mathrm{After\ reduction}\ ]{}\ \tfrac{1}{2}\,\mathrm{A_2BO_4}+\tfrac{1}{2}\,\mathrm{B}+\tfrac{1}{2}\,\mathrm{O_2}\qquad (1)$$

From Eq. 1, it is presumable that a considerable number of cations at the B-site will be reduced into metals without A-site segregation after the phase reconstruction from simple perovskite (ABO3) to n = 1 R-P perovskite (A2BO4). Although several perovskites have exhibited superior distribution of catalyst particles on the surface via phase transition to R-P perovskite28,29,30, a comprehensive understanding of the key factors modulating the phase reconstruction to R-P perovskite is still an open question. Inspired by the above perspectives, the goal of this study is to identify the significant factor contributing to the phase reconstruction from simple perovskite to R-P perovskite. Here, we systematically report the Gibbs free energy for oxygen vacancy formation (Gvf-O) of perovskite oxides with various cations as a previously unexplored factor affecting the phase reconstruction. The type of phase reconstruction could be predicted with the Gvf-O values from the PrO and TO2 networks in Pr0.5(Ba/Sr)0.5TO3−δ (T = Mn, Fe, Co, and Ni), from which the most appropriate cations for the complete reconstruction to R-P perovskite are determined. Afterwards, the phase diagram from in-situ temperature- and environment-controlled X-ray diffraction (XRD) measurements reveals the phase reconstruction tendency of Pr0.5Ba0.5−xSrxFeO3−δ (x = 0, 0.1, 0.2, 0.3, 0.4, and 0.5, abbreviated as PBSF in Supplementary Table 1) materials with respect to the 'x' value and the reduction temperature. Furthermore, the as-exsolved Fe metal particle size and distribution for PBSF after the reduction process are observed from microscopy analysis. In accordance with the theoretical calculations and experimental data, Pr0.5Ba0.2Sr0.3FeO3−δ (A-PBSF30) is adopted as the optimized electrode material for a symmetrical solid oxide cell (S-SOC) and demonstrates exceptional electrochemical performance (1.23 W cm−2 at 800 °C under fuel cell mode and −1.62 A cm−2 at 800 °C under co-electrolysis mode).

Density functional theory calculations

The complete phase reconstruction from simple perovskite (ABO3) to R-P perovskite (A2BO4) via reduction is considered one of the most efficient strategies to significantly boost the population of exsolved particles. However, the key factors contributing to the phase reconstruction to R-P perovskite have not been investigated.
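Before turning to the calculations, it may help to make the stoichiometry of Eq. (1) concrete: for a complete, idealised conversion to the n = 1 R-P phase, half of the B-site cations per formula unit leave the lattice as metal. The short estimate below applies this bookkeeping to the Pr0.5Ba0.5−xSrxFeO3 compositions discussed later (oxygen non-stoichiometry is ignored and standard atomic masses are used); it is a back-of-the-envelope illustration, not a number reported in the study.

```python
# Idealised bookkeeping for ABO3 -> 1/2 A2BO4 + 1/2 B + 1/2 O2 (Eq. 1):
# half of the B-site cations per formula unit end up as exsolved metal.
atomic_mass = {"Pr": 140.91, "Ba": 137.33, "Sr": 87.62, "Fe": 55.85, "O": 16.00}

def exsolved_fe_weight_fraction(x_sr):
    """Weight fraction of Fe exsolved from Pr0.5Ba(0.5-x)Sr(x)FeO3, assuming
    complete conversion to the n = 1 Ruddlesden-Popper phase (illustrative only)."""
    m = (0.5 * atomic_mass["Pr"] + (0.5 - x_sr) * atomic_mass["Ba"]
         + x_sr * atomic_mass["Sr"] + atomic_mass["Fe"] + 3 * atomic_mass["O"])
    return 0.5 * atomic_mass["Fe"] / m

print(exsolved_fe_weight_fraction(0.3))   # ~0.12, i.e. roughly 12 wt% Fe for the A-PBSF30 composition
```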
To determine the unexplored factor for the phase reconstruction for the first time, the Gibbs free energy for oxygen vacancy formation (Gvf-O) and the oxygen vacancy formation energies (Evf-O) from the surface AO (A-site) and BO2 (B-site) networks were calculated for Pr0.5Ba0.5TO3−δ and Pr0.5Sr0.5TO3−δ (T = Mn, Fe, Co, and Ni) perovskite oxides (Fig. 1 and Supplementary Fig. 1)31,32,33,34. For the perovskite oxides to undergo phase reconstruction without phase decomposition under reducing condition, the A-site Gvf-O value should be positive (A-site Gvf-O > 0 eV). Moreover, the B-site Gvf-O value would be an important factor for determining the type of phase reconstruction. For instance, the B-site Gvf-O should be in the range of about −1.2 to 0 eV (−1.2 eV < B-site Gvf-O < 0 eV) to demonstrate complete phase reconstruction to R-P perovskite in the reduction environment. Considering the aforementioned results and the experimental data, only Pr0.5Sr0.5MnO3−δ (PSM) and Pr0.5Sr0.5FeO3−δ (A-PBSF50) are the possible candidates for the complete phase reconstruction to R-P perovskite in this study (Supplementary Fig. 2). Among the two potential candidates, we adopted Fe cation as the more suitable B-site cation because of its much superior catalytic activity for fuel oxidation reaction rather than Mn cation18. Accordingly, we systematically analyzed the phase reconstruction tendency of Pr0.5Ba0.5−xSrxFeO3−δ (x = 0, 0.1, 0.2, 0.3, 0.4, and 0.5, abbreviated as PBSF in Supplementary Table 1) materials with respect to different Ba2+/Sr2+ ratio. Fig. 1: Density functional theory calculations. a Calculated Gibbs free energy for oxygen vacancy formation (Gvf-O) of Pr0.5(Ba/Sr)0.5TO3−δ (T = Mn, Fe, Co, and Ni) from the surface AO (green bar) and BO2 (purple bar) networks and the predicted phase change under reducing condition. b Schematic illustration of the most stable structure configurations of Pr0.5(Ba/Sr)0.5TO3−δ (T = Mn, Fe, Co, and Ni) slab models used for the calculations of oxygen vacancy formation energy values from AO and BO2 networks. Structural characterization Before examining the phase reconstruction tendency of PBSF, the crystalline structures after heat-treated in two different environmental conditions were analyzed by X-ray diffraction (XRD) and Rietveld refinement profiles (Supplementary Figs. 3, 4 and Supplementary Table 2). The air-sintered PBSF are all corresponded to simple perovskite structure without detectable secondary phases. On the other hand, after reduction in H2 atmosphere, the PBSF samples were surprisingly changed to different types of phases depending on the Sr2+ concentration. As shown in Supplementary Fig. 4b, Pr0.5Ba0.5FeO3−δ (A-PBSF00), Pr0.5Ba0.2Sr0.3FeO3−δ (A-PBSF30), and Pr0.5Sr0.5FeO3−δ (A-PBSF50) were changed to Pr0.5Ba0.5FeO3−δ – Fe metal & Pr oxide (R-PBSF00), (Pr0.5Ba0.2Sr0.3)2FeO4+δ – Fe metal (R-PBSF30), and (Pr0.5Sr0.5)2FeO4+δ – Fe metal (R-PBSF50), respectively. Only catalytically active Fe metal peaks along with complete phase reconstruction to R-P perovskite are observed for R-PBSF30 and R-PBSF50, while R-PBSF00 shows Fe metal and Pr oxide segregation without phase reconstruction under reducing condition. Based on the further Rietveld refinement analysis in Supplementary Fig. 5, R-PBSF30 clearly exhibits the complete phase reconstruction to R-P perovskite with tetragonal structure (space group I4/mmm with lattice parameters of a = b = 3.879 and c = 12.704 Å). The complete phase reconstruction could be also described by Eq. 
The complete phase reconstruction can also be described by Eq. (2), from which considerable amounts of Fe metal are expected to be exsolved in the reducing environment.

$$\mathrm{Pr}_{0.5}\mathrm{Ba}_{0.2}\mathrm{Sr}_{0.3}\mathrm{FeO}_{3}\;\xrightarrow{\text{After reduction}}\;\tfrac{1}{2}\,(\mathrm{Pr}_{0.5}\mathrm{Ba}_{0.2}\mathrm{Sr}_{0.3})_{2}\mathrm{FeO}_{4}+\tfrac{1}{2}\,\mathrm{Fe}+\tfrac{1}{2}\,\mathrm{O}_{2}\quad(2)$$

Phase reconstruction tendency analysis from phase diagram
To analyze precisely the phase reconstruction tendency of Pr0.5Ba0.5−xSrxFeO3−δ (x = 0, 0.1, 0.2, 0.25, 0.3, 0.4, and 0.5), in-situ XRD measurements were conducted systematically over a range of reduction temperatures and Sr2+ concentrations (Fig. 2a and Supplementary Fig. 6). Figure 2b displays the proposed phase diagram and the corresponding data points from in-situ XRD measurements in H2 with the temperature raised in 10 °C steps. The A-PBSF00 sample retained the simple perovskite structure over the entire reduction temperature range, with co-segregation of Fe metal and Pr oxide observed at reduction temperatures above 840 °C (Region II in Fig. 2b). Although noticeable phase reconstruction occurred for all Sr2+-doped samples, complete phase reconstruction to the R-P perovskite was not achieved for Pr0.5Ba0.4Sr0.1FeO3−δ (A-PBSF10), Pr0.5Ba0.3Sr0.2FeO3−δ (A-PBSF20), and Pr0.5Ba0.25Sr0.25FeO3−δ (A-PBSF25), even at a reduction temperature of 870 °C (Region III in Fig. 2b). By contrast, complete phase reconstruction to the R-P perovskite is observed for A-PBSF30, Pr0.5Ba0.1Sr0.4FeO3−δ (A-PBSF40), and A-PBSF50 at a reduction temperature of approximately 850 °C (Region IV in Fig. 2b). These results indicate that the 'x' value in PBSF should be at least approximately 0.3, combined with a reduction temperature of about 850 °C, to accomplish complete phase reconstruction, as illustrated in Fig. 2c.
Fig. 2: Examination of phase reconstruction behavior for Pr0.5Ba0.5−xSrxFeO3−δ material under reducing condition. a–c Phase reconstruction tendency of Pr0.5Ba0.5−xSrxFeO3−δ material (x = 0, 0.1, 0.2, 0.25, 0.3, 0.4, and 0.5). a In-situ powder X-ray diffraction (XRD) patterns of Pr0.5Ba0.2Sr0.3FeO3−δ (A-PBSF30) under H2 environment. b Proposed phase diagram of Pr0.5Ba0.5−xSrxFeO3−δ material (x = 0, 0.1, 0.2, 0.25, 0.3, 0.4, and 0.5) in H2 environment as functions of reduction temperature and Sr2+ concentration from in-situ XRD measurements. The phases for region I (gray), II (green), III (blue), and IV (red) are simple perovskite, simple perovskite + Pr oxide + Fe metal, simple perovskite + Ruddlesden-Popper (R-P) perovskite + Fe metal, and R-P perovskite + Fe metal, respectively. c Schematic illustration of the above phase diagram. d, e Density functional theory (DFT) calculations. Calculated profiles of d the relative total energy required for the phase reconstruction from simple perovskite to R-P perovskite and e oxygen vacancy formation energies and co-segregation energies as a function of Sr2+ concentration in four models.
Effect of Sr2+ concentration on phase reconstruction and exsolution
The role of Sr2+ concentration in PBSF in the phase reconstruction tendency to the R-P perovskite was further explored using density functional theory (DFT) calculations. Figure 2d shows the total energies required for the phase reconstruction (Erecon) from simple perovskite to R-P perovskite for four model structures with different Ba2+/Sr2+ ratios.
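As a compact restatement of the empirical thresholds identified from the phase diagram above (x of at least about 0.3 together with a reduction temperature of about 850 °C), the following minimal sketch, not taken from the paper, encodes the rule used to decide whether complete reconstruction to the R-P phase is expected; the threshold values are approximate, as the authors themselves state.

    def complete_rp_reconstruction_expected(x_sr, t_red_c, x_min=0.3, t_min_c=850.0):
        """Empirical rule read off the proposed phase diagram: complete reconstruction
        to the R-P phase is observed only for Sr content x of roughly 0.3 or more and
        reduction temperatures of roughly 850 C or higher (approximate thresholds)."""
        return x_sr >= x_min and t_red_c >= t_min_c

    # A-PBSF00, A-PBSF25, A-PBSF30 and A-PBSF50 at representative temperatures:
    for x, t in [(0.0, 870.0), (0.25, 870.0), (0.3, 850.0), (0.5, 860.0)]:
        print(x, t, complete_rp_reconstruction_expected(x, t))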
The Erecon decreases with increasing Sr2+ concentration in PBSF, indicating that the incorporation of Sr2+ on the Ba2+ site promotes the phase reconstruction to the R-P perovskite (Supplementary Fig. 7). Furthermore, the Evf-O values of the four simple perovskite models were calculated, as shown in Fig. 2e. A more negative Evf-O value implies easier formation of oxygen vacancies in a reducing environment35. The Evf-O value becomes more negative as Sr2+ is doped onto the Ba2+ site, revealing that Sr2+ doping facilitates the formation of oxygen vacancies in a reducing atmosphere. This trend can be explained by the decrease in tolerance factor upon replacement of Ba2+ by Sr2+ (Supplementary Table 3)36. The co-segregation energy (Eco-seg), which is associated with the degree of exsolution of B-site transition metal cations under reducing conditions, was also calculated (Fig. 2e). Eco-seg decreases as the Sr2+ content increases, suggesting an enhanced degree of Fe exsolution with increasing Sr2+ concentration.
Transmission electron microscopy analysis
On the basis of the proposed phase diagram and the DFT calculations, A-PBSF30, the sample with the minimum Sr2+ doping that shows complete reconstruction to the R-P perovskite, was selected as the target material for structural analysis. Transmission electron microscopy (TEM) and scanning TEM (STEM) analyses were performed to probe visually the complete phase reconstruction of the A-PBSF30 material from simple perovskite to R-P perovskite (Fig. 3 and Supplementary Fig. 8). From the high-resolution TEM images and the corresponding fast-Fourier transformed (FFT) patterns, the lattice spacings of A-PBSF30 and R-PBSF30 are 0.395 nm (Fig. 3a) and 0.634 nm (Fig. 3d), consistent with the (001) interplanar spacing of the simple perovskite and the (002) interplanar spacing of the R-P perovskite, respectively. Furthermore, atomic-scale observations of A-PBSF30 and R-PBSF30 were validated by high-angle annular dark-field (HAADF) STEM images taken along the [100] direction, which, although technically demanding to access, is required to resolve the R-P perovskite. The locations of the cations are well matched with the simple perovskite (Fig. 3b) and R-P perovskite (Fig. 3e) structures: because the atomic column intensity in HAADF-STEM mode scales approximately as Z^2 (Z is the atomic number)37, the bright and dark columns correspond to the A-site (i.e., Pr, Ba, and Sr (green)) and the B-site (i.e., Fe (purple)) cations, respectively.
Fig. 3: Electron microscopic analysis. a, b, d, e Transmission electron microscopy (TEM) analysis. a High-resolution (HR) TEM image and the corresponding fast-Fourier transformed (FFT) pattern of Pr0.5Ba0.2Sr0.3FeO3−δ (A-PBSF30) with zone axis (Z.A.) = [100] and b high-angle annular dark-field (HAADF) scanning TEM (STEM) image of A-PBSF30 with simple perovskite structure of [100] direction with d-spacing 001. d HR TEM image and the corresponding FFT pattern of (Pr0.5Ba0.2Sr0.3)2FeO4+δ – Fe metal (R-PBSF30) with Z.A. = [100] and e HAADF STEM image and the atomic arrangement of R-PBSF30 of [100] direction with d-spacing 001. c, f Scanning electron microscope (SEM) images. SEM images presenting the surface morphologies of c A-PBSF30 sintered at 1200 °C for 4 h in air atmosphere and f R-PBSF30 reduced at 850 °C for 4 h in humidified H2 environment (3% H2O). g–i Scanning TEM-energy dispersive spectroscopy (EDS) analysis. g HAADF image of R-PBSF30 and elemental mapping of Pr, Ba, Sr, Fe, and O, respectively.
h, i EDS spectra of h the exsolved Fe metal particle (Spectrum 1, red) and i the parent material (Pr0.5Ba0.2Sr0.3)2FeO4+δ (Spectrum 2, black).
Examination and characterization of exsolved particle size
In general, the particle size and surface distribution of catalysts have a considerable influence on catalytic activity4,38. The size and surface distribution of the metal particles exsolved by the reduction treatment can therefore affect the electro-catalytic activity. Before examining the electro-catalytic effect of the in-situ exsolved Fe metal particles, an explicit comparison of the exsolved particle size and surface distribution for the R-PBSF00, R-PBSF30, and R-PBSF50 samples is presented in scanning electron microscope (SEM) images (Fig. 3c, f and Supplementary Figs. 9, 10). As shown in Fig. 3f, many small particles, about 100 to 200 nm in size and presumed to be Fe metal, are observed uniformly socketed onto the perovskite oxide matrix after reduction. In contrast, the exsolved particles are relatively larger for R-PBSF00 and R-PBSF50 (Supplementary Figs. 11, 12). The energy dispersive spectroscopy (EDS) spectrum and the elemental mapping images of R-PBSF30 also clearly reveal that an Fe metal particle about 150 nm in size is well socketed onto the R-P perovskite after reduction (Fig. 3g, h). Furthermore, a noticeable shift to higher energy in the X-ray absorption near-edge structure (XANES), a marked increase in the Fe–Fe shell intensity in the Fourier-transformed extended X-ray absorption fine structure (EXAFS) spectra after reduction, and the presence of the Fe0 2p1/2 peak only for R-PBSF30 in the X-ray photoelectron spectroscopy (XPS) measurements confirm the exsolution of Fe metal onto the surface under reducing conditions39, in agreement with the experimental results above (Fig. 4). To investigate the electrically conductive properties of the exsolved Fe metal particles, the electrical conductivities of the PBSF samples were measured as a function of temperature under a reducing atmosphere (Supplementary Fig. 13). A-PBSF30 displayed the highest electrical conductivity of the PBSF series in the reducing environment, together with a sufficiently high electrical conductivity in air (Supplementary Fig. 13b)40, suggesting that A-PBSF30 is a promising electrode material for S-SOCs.
Fig. 4: Oxidation state characterization. a, b Fe K-edge X-ray absorption near-edge structure (XANES) spectra of Pr0.5Ba0.2Sr0.3FeO3−δ (A-PBSF30), (Pr0.5Ba0.2Sr0.3)2FeO4+δ (R-PBSF30) with two references (Fe foil and Fe2O3). c Fourier-transformed Fe K-edge extended X-ray absorption fine structure (EXAFS) spectra of A-PBSF30 and R-PBSF30. X-ray photoelectron spectra (XPS) of Fe 2p1/2 for d A-PBSF30 and e R-PBSF30.
Electrochemical performance evaluation
Prior to assessing the electrochemical performance of A-PBSF30 in the practical application of S-SOCs, good thermo-chemical compatibility between all PBSF compositions and the La0.9Sr0.1Ga0.8Mg0.2O3−δ (LSGM) electrolyte was confirmed by XRD measurement (Supplementary Fig. 14). Moreover, the similar microstructures of the air-sintered PBSF samples imply that the electrochemical performance is not affected by differences in surface morphology (Supplementary Fig. 15).
Then, electrochemical performance of symmetrical solid oxide fuel cells (S-SOFCs) using PBSF as both electrodes was characterized by LSGM electrolyte-supported cells to identify the huge impact of the exsolved Fe metal particle size and surface distribution (Fig. 5a and Supplementary Figs. 16, 17). The peak power density of the A-PBSF30 symmetrical cell is 1.23 W cm−2 at 800 °C with humidified H2 (3% H2O) as fuel. This outstanding cell performance is the highest out of open literature based on LSGM electrolyte-supported S-SOFCs without any external catalysts at 800 °C under humidified H2 (3% H2O) as fuel to our best knowledge (Fig. 5b and Table 1)28,39,41,42,43,44,45,46. In addition, peak power output of 0.73 W cm−2 was demonstrated in humidified C3H8 (3% H2O) at 800 °C (Fig. 5c and Supplementary Fig. 18). Furthermore, the A-PBSF30 symmetrical cell demonstrated fairly stable current density without observable degradation for about 200 h in H2 and 150 h in C3H8 at 700oC (Fig. 5d, e). We also evaluated the electrochemical performance of the A-PBSF30 symmetrical cell in co-electrolysis mode. The excellent current density of −1.62 A cm−2 at a cell voltage of 1.5 V (close to thermo-neutral voltage)47 at 800 °C under co-electrolysis condition was demonstrated for the A-PBSF30 symmetrical cell (Fig. 5f), which is exceptionally high compared to other oxygen-conducting solid oxide electrolysis cell (SOEC) systems with different electrode materials19,31,48,49. The in-operando quantitative analysis of the synthetic gas products (H2 and CO) was further investigated via gas chromatography (GC) profiles for the A-PBSF30 symmetrical cell at 800 °C during co-electrolysis of H2O and CO2 (Supplementary Fig. 19). The amount of generated H2 and CO were measured to be 0.50 and 10.81 ml min−1 cm−2, respectively, implying that the A-PBSF30 symmetrical cell could efficiently produce synthetic gas during co-electrolysis50. Together with superior electrochemical performance and efficient synthetic gas production, relatively constant voltage was observed for 100 h under a constant current load of −0.25 A cm−2 at 700 °C in co-electrolysis mode (Fig. 5g), representing great durability in continuous SOEC operation. It is noteworthy that in-situ exsolution of well-dispersed Fe metal particles after complete phase reconstruction to R-P perovskite matrix acts as catalysts with promising electro-catalytic activity (Fig. 6), leading to outstanding electrochemical performances in various applications. Fig. 5: Electrochemical performance measurements. a, b Comparison of the maximum power density values at 800 °C in H2. (a) in terms of Pr0.5Ba0.5−xSrxFeO3−δ compositions (x = 0, 0.3, and 0.5) and b of the present work and other LSGM electrolyte-supported studies with symmetrical cell configuration at various temperature regimes. c I–V curves and the corresponding power densities of symmetrical cell with (Pr0.5Ba0.2Sr0.3)2FeO4+δ – Fe metal (R-PBSF30) fuel electrode at 800 °C under H2 and C3H8 humidified fuels (3% H2O) fed on the fuel electrode and air fed on the air electrode. d, e Durability test of symmetrical cell with R-PBSF30 fuel electrode recorded with respect to time at a constant voltage of 0.6 V at 700 °C under d H2 and e C3H8 humidified fuels. f I–V curves for symmetrical cell with R-PBSF30 fuel electrode with humidified H2 and CO2 with H2O co-fed to the fuel electrode side and air fed to the air electrode. 
g Durability test of symmetrical cell with R-PBSF30 fuel electrode recorded at a constant current of −0.25 A cm−2 at 700 °C during co-electrolysis for 100 h. Table 1 Comparison of the electrochemical performance of La1-xSrxGa1-yMgyO3−δ (LSGM) electrolyte-supported symmetrical solid oxide fuel cells (S-SOFCs) reported in the literature and in the present study. Fig. 6: Schematic illustration of this work. Schematic illustration of the fuel electrode side of Pr0.5Ba0.5−xSrxFeO3−δ (x = 0, 0.3, and 0.5) symmetrical cells and its relation to electrochemical performances. In summary, this study successfully calculated Gvf-O value from PrO and TO2 in Pr0.5(Ba/Sr)0.5TO3−δ (T = Mn, Fe, Co, and Ni) as the key factor for identifying the type of the phase reconstruction. Remarkably, the phase diagram acquired from in-situ temperature and environment-controlled XRD measurements indicated that the complete phase reconstruction to R-P perovskite occurs at least approximately x = 0.3 above at the reduction temperature of 850 °C for PBSF system. Among PBSF with complete phase reconstruction, the highly-populated Fe metal particles socketed on R-PBSF30 attributed to excellent electrochemical performances under both fuel cell (1.23 W cm−2 at 800 °C under H2 fuel) and electrolysis cell (−1.62 A cm−2 at 1.5 V and 800 °C under CO2 and H2O fuels) modes coupled with great durability. Our investigations strongly provide a pathway to explore new factors for the phase reconstruction and offer a new opportunity to discover prospective candidates with customized functionalities for next-generation energy-related applications. Material synthesis Pr0.5Ba0.5−xSrxFeO3−δ samples (x = 0, 0.1, 0.2, 0.3, 0.4, and 0.5, abbreviated as PBSF in Supplementary Table 1) and Pr0.5Sr0.5MnO3−δ were synthesized by the Pechini method. For PBSF materials, stoichiometric amounts of Pr(NO3)3·6H2O (Aldrich, 99.9%, metal basis), Ba(NO3)2 (Aldrich, 99 + %), Sr(NO3)2 (Aldrich, 99+%) and Fe(NO3)3·9H2O (Aldrich, 98+%) nitrate salts were dissolved in distilled water with the addition of quantitative amounts of citric acid and poly-ethylene glycol, while for Pr0.5Sr0.5MnO3−δ material, stoichiometric amounts of Pr(NO3)3·6H2O (Aldrich, 99.9%, metal basis), Sr(NO3)2 (Aldrich, 99+%) and Mn(NO3)2·4H2O (Aldrich, 97+%) nitrate salts were dissolved in distilled water with the addition of quantitative amounts of citric acid and poly-ethylene glycol. After removal of excess resin by heating at 280 °C, transparent organic resins containing metals in a solid solution were formed. The resins were calcined at 600 °C for 4 h and then sintered at 1200 °C for 4 h in air environment. The chemical compositions of the synthesized powders and their abbreviations are given in Supplementary Table 1. The crystal structures of the Pr0.5Ba0.5−xSrxFeO3−δ samples (x = 0, 0.3, and 0.5) and Pr0.5Sr0.5MnO3−δ after heat-treated in two different environmental conditions (sintered at 1200 °C for 4 h in air environment and reduced at 850 °C for 4 h in humidified H2 environment (3% H2O)) were first identified by powder XRD patterns (Bruker diffractometer (LYNXEYE 1D detector), Cu Kα radiation, 40 kV, 40 mA) in the 2 theta range of 20° < 2θ < 60°. To calculate the exact Bravais lattice of the PBSF, the samples were first pressed into pellets at 2 MPa for 30 s and then sintered at 1200 °C for 4 h in air atmosphere. The XRD patterns of air-sintered PBSF series and (Pr0.5Ba0.2Sr0.3)2FeO4+δ – Fe metal (R-PBSF30) samples were further measured by high-power (HP) XRD. 
(Max 2500 V, Cu Kα radiation, 40 kV, 200 mA) at a scanning rate of 1° min−1 and a range of 15° < 2θ < 105°. After the HP XRD measurement, the powder patterns and lattice parameters were analyzed by the Rietveld refinement technique using the GSAS II program. The surface analysis of Pr0.5Ba0.2Sr0.3FeO3−δ (A-PBSF30) sintered at 1200 °C for 4 h in air atmosphere and (Pr0.5Ba0.2Sr0.3)2FeO4+δ – Fe metal (R-PBSF30) reduced at 850 °C for 4 h in humidified H2 environment (3% H2O) were conducted on XPS analyses on ESCALAB 250 XI from Thermo Fisher Scientific with a monochromated Al-Kα (ultraviolet He1, He2) X-ray source. The X-ray absorption fine structure (XAFS) spectra of Fe K-edge for A-PBSF30, R-PBSF30, and two references (Fe foil and Fe2O3 powder) were measured on ionization detectors under fluorescence mode at the Pohang Accelerator Laboratory (PAL, 6D extended XAFS (EXAFS)). The XAFS and Fourier-transformed (FT) EXAFS spectra analysis were performed using the Athena (Demeter) program. In-situ phase reconstruction tendency evaluation The in-situ phase reconstruction tendency of Pr0.5Ba0.5−xSrxFeO3−δ (x = 0, 0.1, 0.2, 0.25, 0.3, 0.4, and 0.5) samples were identified by in-situ XRD measurements under humidified H2 condition (3% H2O). The Pr0.5Ba0.25Sr0.25FeO3−δ (A-PBSF25) sample was additionally synthesized by the Pechini method to evaluate the phase reconstruction tendency under reducing atmosphere. The Pr0.5Ba0.5−xSrxFeO3−δ (x = 0, 0.1, 0.2, 0.25, 0.3, 0.4, and 0.5) samples were sintered at 1200 °C for 4 h in air atmosphere to form simple perovskite structure with fine crystallinity. The reduction temperatures were ranged from 700 to 870 °C and 2 h were delayed at each temperature interval (Bruker D8 advance). Electron microscopy analysis The microstructures of (1) Pr0.5Ba0.5−xSrxFeO3−δ samples (x = 0, 0.3, and 0.5) sintered at 1200 °C for 4 h in air atmosphere, (2) Pr0.5Ba0.5−xSrxFeO3−δ samples (x = 0, 0.3, and 0.5)reduced at 850 °C for 4 h in humidified H2 environment (3% H2O), and (3) all PBSF samples sintered at 950 °C for 4 h in air atmosphere were investigated by using an SEM (Nova Nano FE-SEM). TEM analyses were conducted with a JEOL JEM 2100 F with a probe forming (STEM) Cs (spherical aberration) corrector at 200 kV. Electrical conductivity measurements The electrical conductivities of PBSF with respect to temperature were measured under air and 5% H2 environments by the 4-probe method. The samples were pressed into pellets of cylindrical shape and then sintered at 1400 °C for 4 h in air environment to reach an apparent density of ~90%. The electrical conductivities were first measured in air atmosphere from 300 to 800 °C with intervals of 50 °C, and then measured in wet 5% H2 atmosphere (Ar balance, 3% H2O) from 300 to 800 °C with intervals of 50 °C. The current and voltage were recorded by a Biologic Potentiostat to calculate the resistance, resistivity, and conductivity of samples. Computational methods DFT calculations were performed to investigate the appropriate dopants for the phase reconstruction to n = 1 R-P perovskite along with the role of Sr2+ concentration on phase reconstruction tendency of PBSF using the Vienna ab initio Simulation Package51,52. For the exchange-correlation, the generalized gradient approximation (GGA) based Predew-Burke-Ernzerhof functional was used53. The electron-ion interactions were described using the projector augmented wave potential54,55. A plane wave was expanded up to cutoff energy of 400 eV. 
Electronic occupancies were calculated using Gaussian smearing with a smearing parameter of 0.05 eV. For the bulk optimization, all internal atoms were relaxed using a conjugate gradient algorithm until the force on each atom was below 0.03 eV/Å, with an energy convergence of 10−5 eV. The GGA + U approach was used to correct self-interaction errors, with Ueff = 4.0 eV for the Fe 3d orbital, Ueff = 3.3 eV for the Co 3d orbital, Ueff = 4.0 eV for the Mn 3d orbital, Ueff = 7.0 eV for the Ni 3d orbital, and Ueff = 6.0 eV for the Pr 4f orbital18,56,57. For the Brillouin zones of the formation energy calculations of the cubic perovskite (2 × 2 × 4 supercell) and the n = 1 R-P perovskite (2 × 2 × 1 supercell), 3 × 3 × 1 and 3 × 3 × 2 Monkhorst-Pack k-point sampling were used, respectively58. For the oxygen vacancy formation energy calculations of the BO2 layer between two AO layers, a PrO-terminated (001) slab model (2 × 2 surface, 8 layers with 2 fixed bottom layers, vacuum layer of 16 Å) was used. For the co-segregation energy calculations, an FeO2-terminated (001) slab model (2√2 × 2√2 surface, 8 layers with 3 fixed bottom layers, vacuum layer of 16 Å) was used. For the Brillouin zones of the oxygen vacancy formation energy and co-segregation energy calculations, 3 × 3 × 1 and 1 × 1 × 1 Monkhorst-Pack k-point sampling were used. The optimized lattice parameters of four materials, A-PBSF00 (Ba:Sr = 16:0), Pr0.5Ba0.1875Sr0.3125FeO3−δ (Ba:Sr = 6:10), Pr0.5Ba0.125Sr0.375FeO3−δ (Ba:Sr = 4:12), and A-PBSF50 (Ba:Sr = 0:16), were used for the model structures in the computational studies. For the Ba2+/Sr2+ mixed models, the most stable configurations among the five different Ba configurations were used. The relative energies required for the phase reconstruction from simple perovskite to n = 1 R-P perovskite (Erecon) of six model structures with different Sr2+ concentrations were calculated from the total energy difference between the simple perovskite and the n = 1 R-P perovskite using the following equation:

$$E_{\mathrm{recon}}=\tfrac{1}{16}E_{\text{R-P perov}}+\tfrac{1}{2}E_{\mathrm{Fe}}+\tfrac{1}{2}E_{\mathrm{O_{2}}}-\tfrac{1}{16}E_{\text{simple perov}},$$

where $E_{\text{simple perov}}$ and $E_{\text{R-P perov}}$ are the total energies of the simple perovskite (2 × 2 × 4 supercell) and the R-P perovskite (2 × 2 × 1 supercell), $E_{\mathrm{Fe}}$ is the total energy of the body-centered cubic Fe metal unit cell, and $E_{\mathrm{O_{2}}}$ is the total energy of the gas-phase oxygen molecule. The oxygen vacancy formation energies (Evf-O) of Pr0.5Ba0.5TO3−δ, Pr0.5Sr0.5TO3−δ (T = Mn, Co, Fe, and Ni), and the four model structures with different Sr2+ concentrations were calculated using the lattice oxygen on the BO2 layer, since the phase reconstruction from simple perovskite (ABO3) to n = 1 R-P perovskite (A2BO4) requires the formation of both oxygen and B-site vacancies. For the Pr0.5Ba0.5TO3−δ and Pr0.5Sr0.5TO3−δ (T = Mn, Co, Fe, and Ni) models, the most stable structure configurations were used for the oxygen vacancy formation energy calculations. For the four computational models with different Sr2+ concentrations, the most stable vacancy sites were used for the Ba2+/Sr2+ mixed models with Ba:Sr = 6:10 and Ba:Sr = 4:12. The Evf-O was calculated using the following equation:

$$E_{\mathrm{vf\text{-}O}}=E_{\mathrm{perov\text{-}defect}}+\tfrac{1}{2}E_{\mathrm{O_{2}}}-E_{\mathrm{perov}}$$

where $E_{\mathrm{perov\text{-}defect}}$ and $E_{\mathrm{perov}}$ are the total energies of the PrO-terminated (001) perovskite slab model with and without the oxygen vacancy, respectively.
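For readers reproducing these numbers, the two defining relations above translate directly into code. The following is a minimal sketch only: the variable names are ours, and the total energies would come from the VASP runs described in this section.

    def reconstruction_energy(e_rp_supercell, e_fe_metal, e_o2_gas, e_simple_supercell):
        # E_recon = (1/16) E_R-P + (1/2) E_Fe + (1/2) E_O2 - (1/16) E_simple,
        # with the 2x2x1 R-P and 2x2x4 simple-perovskite supercell energies in eV.
        return (e_rp_supercell / 16.0 + 0.5 * e_fe_metal + 0.5 * e_o2_gas
                - e_simple_supercell / 16.0)

    def oxygen_vacancy_formation_energy(e_slab_defect, e_slab_perfect, e_o2_gas):
        # E_vf-O = E_perov-defect + (1/2) E_O2 - E_perov for the PrO-terminated (001) slab.
        return e_slab_defect + 0.5 * e_o2_gas - e_slab_perfect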
The co-segregation energy (Eco-seg) is defined as the total energy difference between two surface models with different vacancy sites. The co-segregation energies of the four computational models with different Sr2+ concentrations were calculated using the following equation:

$$E_{\text{co-seg}}=E_{(\mathrm{Fe}\text{-}V_{\mathrm{O}})\_\mathrm{surface}}-E_{(\mathrm{Fe}\text{-}V_{\mathrm{O}})\_\mathrm{bulk}}$$

where $E_{(\mathrm{Fe}\text{-}V_{\mathrm{O}})\_\mathrm{surface}}$ and $E_{(\mathrm{Fe}\text{-}V_{\mathrm{O}})\_\mathrm{bulk}}$ are the total energies of the FeO2-terminated (001) perovskite slab model with an oxygen vacancy in the surface FeO2 layer and in the bulk FeO2 layer, respectively. Furthermore, the Gibbs free energy for oxygen vacancy formation of the eight samples was calculated to include the temperature and oxygen partial pressure in the Evf-O calculations. The equations used for the Gvf-O calculations from the surface AO and BO2 layers in Pr0.5(Ba/Sr)0.5TO3−δ (T = Mn, Fe, Co, and Ni) are as follows:

$$G_{\mathrm{vf\text{-}O}}(\mathrm{AO\ layer})=E_{\mathrm{perov\text{-}defect}}-E_{\mathrm{perov}}+\tfrac{1}{2}\mu_{\mathrm{O_{2}}}\quad(6)$$

$$\mu_{\mathrm{O_{2}}}=E_{\mathrm{O_{2}(g)}}^{\mathrm{DFT}}+E_{\mathrm{O_{2}(g)}}^{\mathrm{ZPE}}+E_{\mathrm{O_{2}(g)}}^{\mathrm{correction}}-TS_{\mathrm{O_{2}(g)}}+k_{B}T\ln\!\left(\frac{P_{\mathrm{O_{2}}}}{P_{0}}\right)\quad(7)$$

$$G_{\mathrm{vf\text{-}O}}(\mathrm{BO_{2}\ layer})=E_{\mathrm{perov\text{-}defect}}-E_{\mathrm{perov}}+\left(\mu_{\mathrm{H_{2}O}}-\mu_{\mathrm{H_{2}}}\right)+E_{a}^{\mathrm{O_{vac}\ diffusion}}\quad(8)$$

$$\mu_{\mathrm{H_{2}O}}=\left(\Delta H_{\mathrm{H_{2}O}}^{\exp}+E_{\mathrm{H_{2}(g)}}^{\mathrm{DFT}}+E_{\mathrm{H_{2}(g)}}^{\mathrm{ZPE}}+\tfrac{1}{2}\left(E_{\mathrm{O_{2}(g)}}^{\mathrm{DFT}}+E_{\mathrm{O_{2}(g)}}^{\mathrm{ZPE}}+E_{\mathrm{O_{2}(g)}}^{\mathrm{correction}}\right)\right)-TS_{\mathrm{H_{2}O(g)}}+k_{B}T\ln\!\left(\frac{P_{\mathrm{H_{2}O}}}{P_{0}}\right)\quad(9)$$

$$\mu_{\mathrm{H_{2}}}=E_{\mathrm{H_{2}(g)}}^{\mathrm{DFT}}+E_{\mathrm{H_{2}(g)}}^{\mathrm{ZPE}}-TS_{\mathrm{H_{2}(g)}}+k_{B}T\ln\!\left(\frac{P_{\mathrm{H_{2}}}}{P_{0}}\right)\quad(10)$$

Here $E_{\mathrm{perov\text{-}defect}}$ and $E_{\mathrm{perov}}$ are the total energies of the PrO-terminated (001) perovskite slab model with and without the oxygen vacancy, respectively. The $\mu_{\mathrm{O_{2}}}$, $\mu_{\mathrm{H_{2}}}$, and $\mu_{\mathrm{H_{2}O}}$ are the Gibbs free energies of the diatomic oxygen, hydrogen, and water molecules, respectively. The $E_{\mathrm{O_{2}(g)}}^{\mathrm{DFT}}$ and $E_{\mathrm{H_{2}(g)}}^{\mathrm{DFT}}$ are the gas-phase energies of the ground-state triplet O2 molecule and the hydrogen molecule, respectively. The zero-point energies of the oxygen molecule ($E_{\mathrm{O_{2}(g)}}^{\mathrm{ZPE}}$) and hydrogen molecule ($E_{\mathrm{H_{2}(g)}}^{\mathrm{ZPE}}$) were taken from previously calculated values59. The standard entropy of gas-phase oxygen ($S_{\mathrm{O_{2}(g)}}$) was obtained from the National Institute of Standards and Technology Chemistry WebBook (http://webbook.nist.gov/chemistry). Moreover, the correction energy of the oxygen molecule ($E_{\mathrm{O_{2}(g)}}^{\mathrm{correction}}$) was added to reconcile the differences in Evf-O between results obtained with the computational method (GGA functional) and experimental results60. The temperature and p(O2) are 750 °C and 10−9 atm for the Gvf-O calculations at the surface AO layer (Eqs. 6 and 7), and the temperature, p(O2), p(H2), and p(H2O) values are 750 °C, 10−9, 0.1, and 0.01 atm, respectively, for the Gvf-O calculations at the BO2 layer (Eqs. 8, 9, and 10).
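Equations (6) and (7) are straightforward to evaluate once the DFT, zero-point, correction and entropy terms are known. The sketch below is purely illustrative and not from the paper: the function and variable names are ours, the entropy is assumed to be supplied in eV K−1 (converted from the NIST value), and only the AO-layer expression is shown.

    import math

    K_B = 8.617333262e-5  # Boltzmann constant, eV/K

    def mu_o2(e_dft, e_zpe, e_corr, s_o2, t_kelvin, p_o2_atm, p0_atm=1.0):
        # Eq. (7): Gibbs free energy of gas-phase O2 per molecule, in eV,
        # with the entropy s_o2 given in eV/K and pressures in atm.
        return (e_dft + e_zpe + e_corr - t_kelvin * s_o2
                + K_B * t_kelvin * math.log(p_o2_atm / p0_atm))

    def gvf_o_ao_layer(e_slab_defect, e_slab, mu_o2_value):
        # Eq. (6): oxygen-vacancy formation free energy on the surface AO layer.
        return e_slab_defect - e_slab + 0.5 * mu_o2_value

    # Conditions quoted in the text for the AO-layer evaluation:
    T = 750.0 + 273.15   # K
    P_O2 = 1.0e-9        # atm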
Under this specified condition (reducing condition), we assumed that the reduction of BO2 layer occurred via two elementary steps: surface hydrogen oxidation reaction (\({O}_{{lattice}}+{H}_{2}\left(g\right)\leftrightarrow {H}_{2}O\left(g\right)\)) and oxygen vacancy diffusion toward the BO2 layer. The activation energy of oxygen vacancy diffusion (\({E}_{a}^{{O}_{{vac}}{diffusion}}\): 0.95 eV) was calculated from the electrochemical measurements (Arrhenius plot of area specific resistance) of Pr0.4Sr0.6Fe0.875Mo0.125O3−δ (PSFM) material under H2 condition61. Electrochemical performance measurements La0.9Sr0.1Ga0.8Mg0.2O3−δ (LSGM) powder was prepared by conventional solid-state reaction to fabricate LSGM electrolyte-supported symmetrical S-SOCs. Stoichiometric amounts of La2O3 (Sigma, 99.99%), SrCO3 (Sigma, 99.99%), Ga2O3 (Sigma, 99.99%) and MgO (Sigma, 99.9%) powders were first mixed in a mortar and then ball-milled in ethanol for 24 h to obtain the desired composition. After drying, the obtained powder was calcined at 1000 °C for 6 h. After formation of LSGM powder with desired stoichiometry, the electrolyte substrate was prepared by pressing at 2 MPa for 30 s into cylindrical shape and then sintered at 1475 °C for 5 h. The thickness of LSGM electrolyte was polished to about 250 μm. A La0.6Ce0.4O2-δ (LDC) as a buffer layer was also prepared by ball-milling stoichiometric amounts of La2O3 (Sigma, 99.99%) and CeO2 (Sigma, 99.99%) in ethanol for 24 h and then calcined at 1000 °C for 6 h. Electrode slurries were prepared by mixing pre-calcined powders of PBSF with an organic binder (Heraeus V006) and acetone in 3:6:0.6 weight ratio. The electrode inks were applied onto the LSGM pellet by a screen-printing method and then sintered at 950 °C for 4 h in air to achieve the desired porosity. The porous electrodes had an active area of 0.36 cm2 and a thickness of about 15 μm. The LDC layer was screen-printed between the electrode and electrolyte to prevent inter-diffusion of ionic species between electrode and electrolyte. The cells with configuration of Electrode |LDC | LSGM | LDC | Electrode were mounted on alumina tubes with ceramic adhesives (Ceramabond 552, Aremco) for electrochemical performance tests (Cross-sectional SEM image of the A-PBSF30 symmetrical cell given in Supplementary Fig. 20). Silver paste and silver wire were utilized for electrical connections to both the fuel electrode and air electrode. The entire cell was placed inside a furnace and heated to the desired temperature. I–V polarization curves of synthesized fuel cells with PBSF as both sides of electrodes were measured using a BioLogic Potentiostat in a temperature range of 700 to 800 °C (temperature interval: 50 °C) in humidified hydrogen (3% H2O) at a flow rate of 100 ml min−1. Fuel cell evaluation under humidified C3H8 fuel (3% H2O) at a flow rate of 100 ml min−1 were also performed for symmetrical solid oxide fuel cell (S-SOFC) test with cell composition A-PBSF30 | LDC | LSGM | LDC | A-PBSF30 from 700 to 800 °C (temperature interval: 50 °C) using a BioLogic Potentiostat. For the electrochemical performance test of S-SOC with the cell composition of A-PBSF30 | LDC | LSGM | LDC | A-PBSF30 during co-electrolysis, 50 ml min−1 of H2 and CO2 into a H2O-containing bubbler (with a heating tape) were co-fed to fuel electrode and 100 ml min−1 of air was fed to air electrode. 
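As an aside that is not part of the original methods: the co-electrolysis current density can be converted into an upper bound on syngas production through Faraday's law, since both H2O → H2 and CO2 → CO are two-electron reductions. This gives a quick consistency check against the GC-measured rates quoted earlier (0.50 ml min−1 cm−2 of H2 and 10.81 ml min−1 cm−2 of CO). The molar volume used for the conversion is an assumed ideal-gas value near room temperature.

    F = 96485.0        # Faraday constant, C/mol
    V_M = 24.0e3       # assumed molar volume of an ideal gas, ml/mol (about 20 C, 1 atm)

    def max_syngas_rate_ml_min_cm2(current_density_a_cm2, electrons_per_molecule=2):
        # Upper bound (100 % Faradaic efficiency) on H2 + CO production implied by
        # the cell current density, converted to ml min^-1 cm^-2 of gas.
        mol_per_s_cm2 = abs(current_density_a_cm2) / (electrons_per_molecule * F)
        return mol_per_s_cm2 * 60.0 * V_M

    # -1.62 A cm^-2 at 1.5 V and 800 C, the condition of the GC measurement
    print(max_syngas_rate_ml_min_cm2(1.62))   # roughly 12 ml min^-1 cm^-2

Under these assumptions the bound is of the same order as the total of about 11.3 ml min−1 cm−2 measured by GC, i.e. the reported rates correspond to a Faradaic efficiency close to unity.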
The in-operando quantitative analysis of the generated synthetic gas (H2 and CO) during co-electrolysis of CO2 and H2O (Ratio of CO2:H2:H2O = 45:45:10) for the A-PBSF30 symmetrical cell (@800 °C and 1.5 V) was demonstrated by the gas chromatograph (Agilent 7820 A GC instrument) with a thermal conductivity detector and a packed column (Agilent carboxen 1000). The data measured, simulated, and analyzed in this study are available from the corresponding author on reasonable request. The original version of the Peer Review File associated with this Article was updated shortly after publication. Sengodan, S. et al. Layered oxygen-deficient double perovskite as an efficient and stable anode for direct hydrocarbon solid oxide fuel cells. Nat. Mater. 14, 205–209 (2015). ADS CAS PubMed Google Scholar Neagu, D., Tsekouoras, G., Miller, D. N., Menard, H. & Irvine, J. T. S. In situ growth of nanoparticles through control of non-stoichiometry. Nat. Chem. 5, 916–923 (2013). Kang, K. N. et al. Co3O4 exsolved defective layered perovskite oxide for energy storage systems. ACS Energy Lett. 5, 3828–3836 (2020). Joo, S. et al. Cation-swapped homogeneous nanoparticles in perovskite oxides for high power density. Nat. Commun. 10, 1–9 (2019). Opitz, A. K. et al. Understanding electrochemical switchability of perovskite-type exsolution catalysts. Nat. Commun. 11, 1–10 (2020). Irvine, J. T. S. et al. Evolution of the electrochemical interface in high-temperature fuel cells and electrolysers. Nat. Energy 1, 1–13 (2016). Ding, D., Li, X., Lai, S. Y., Gerdes, K. & Liu, M. Enhancing SOFC cathode performance by surface modification through infiltration. Energy Environ. Sci. 7, 552–575 (2014). Schlupp, M. V. F., Evans, A., Martynczuk, J. & Prestat, M. Micro-solid oxide fuel cell membranes prepared by aerosol-assisted chemical vapor deposition. Adv. Energy Mater. 4, 1–7 (2014). Kwak, N. W. et al. In situ synthesis of supported metal nanocatalysts through heterogeneous doping. Nat. Commun. 9, 1–8 (2018). ADS CAS Google Scholar Jun, A., Kim, J., Shin, J. & Kim, G. Achieving high efficiency and eliminating degradation in solid oxide electrochemical cells using high oxygen-capacity perovskite. Angew. Chem. Int. Ed. 55, 12512–12515 (2016). Choi, S. et al. A robust symmetrical electrode with layered perovskite structure for direct hydrocarbon solid oxide fuel cells: PrBa0.8Ca0.2Mn2O5+δ. J. Mater. Chem. A. 4, 1747–1753 (2016). Myung, J. H., Neagu, D., Miller, D. N. & Irvine, J. T. S. Switching on electrocatalytic activity in solid oxide cells. Nature 537, 528–531 (2016). Neagu, D. et al. In situ observation of nanoparticle exsolution from perovskite oxides: from atomic mechanistic insight to nanostructure tailoring. ACS Nano 13, 12996–13005 (2019). Sun, Y. et al. New opportunity for in situ exsolution of metallic nanoparticles on perovskite parent. Nano Lett. 16, 5303–5309 (2016). Kwon, O. et al. Self-assembled alloy nanoparticles in a layered double perovskite as a fuel oxidation catalyst for solid oxide fuel cells. J. Mater. Chem. A. 6, 15947–15953 (2018). Joo, S. et al. Highly active dry methane reforming catalysts with boosted in situ grown Ni-Fe nanoparticles on perovskite via atomic layer deposition. Sci. Adv. 6, 1–9 (2020). Zhang, J., Gao, M. R. & Luo, J. L. In situ exsolved metal nanoparticles: a smart approach for optimization of catalysts. Chem. Mater. 32, 5424–5441 (2020). Kwon, O. et al. Exsolution trends and co-segregation aspects of self-grown catalyst nanoparticles in perovskites. Nat. Commun. 8, 1–7 (2017). 
Liu, S., Liu, Q. & Luo, J. L. Highly stable and efficient catalyst with in situ exsolved Fe-Ni alloy nanospheres socketed on an oxygen deficient perovskite for direct CO2 electrolysis. ACS Catal. 6, 6219–6228 (2016). Sun, Y. et al. A-site deficient perovskite: the parent for in situ exsolution of highly active, regenerable nano-particles as SOFC anodes. J. Mater. Chem. A. 3, 11048–11056 (2015). Tsekouras, G., Neagu, D. & Irvine, J. T. S. Step-change in high temperature steam electrolysis performance of perovskite oxide cathodes with exsolution of B-site dopants. Energy Environ. Sci. 6, 256–266 (2013). Zhu, T., Troiani, H. E., Mogni, L. V., Han, M. & Barnett, S. A. Ni-substituted Sr(Ti,Fe)O3 SOFC anodes: achieving high performance via metal alloy nanoparticle exsolution. Joule 2, 478–496 (2018). Gao, Y., Chen, D., Saccoccio, M., Lu, Z. & Ciucci, F. From material design to mechanism study: nanoscale Ni exsolution on a highly active A-site deficient anode material for solid oxide fuel cells. Nano Energy 27, 499–508 (2016). Kim, K. J. et al. Facet-dependent in situ growth of nanoparticles in epitaxial thin films: the role of interfacial energy. J. Am. Chem. Soc. 141, 7509–7517 (2019). Neagu, D. et al. Demonstration of chemistry at a point through restructuring and catalytic activation at anchored nanoparticles. Nat. Commun. 8, 1–8 (2017). Yang, C. et al. In situ fabrication of CoFe alloy nanoparticles structured (Pr0.4Sr0.6)3(Fe0.85Nb0.15)2O7 ceramic anode for direct hydrocarbon solid oxide fuel cells. Nano Energy 11, 704–710 (2015). Lv, H. et al. In situ investigation of reversible exsolution/dissolution of CoFe alloy nanoparticles in a Co-doped Sr2Fe1.5Mo0.5O6-δ cathode for CO2 electrolysis. Adv. Mater. 32, 1906193 (2020). Yang, C. et al. Sulfur-tolerant redox-reversible anode material for direct hydrocarbon solid oxide fuel cells. Adv. Mater. 24, 1439–1443 (2012). Du, Z. et al. High-performance anode material Sr2FeMo0.65Ni0.35O6-δ with in situ exsolved nanoparticle catalyst. ACS Nano. 10, 8660–8669 (2016). Chung, Y. S. et al. In situ preparation of a La1.2Sr0.8Mn0.4Fe0.6O4 Ruddlesden-Popper phase with exsolved Fe nanoparticles as an anode for SOFCs. J. Mater. Chem. A. 5, 6437–6446 (2017). Park, S. et al. In situ exsolved Co nanoparticles on Ruddlesden-Popper material as highly active catalyst for CO2 electrolysis to CO. Appl. Catal. B Environ. 248, 147–156 (2019). Kim, K. et al. Mechanistic insights into the phase transition and metal ex-solution phenomena of Pr0.5Ba0.5Mn0.85Co0.15O3−δ from simple to layered perovskite under reducing conditions and enhanced catalytic activity. Energy Environ. Sci. 14, 873–882 (2021). Vibhu, V. et al. Characterization of PrNiO3-δ as oxygen electrode for SOFCs. Solid State Sci. 81, 26–31 (2018). Choi, S. et al. Highly efficient and robust cathode materials for low-temperature solid oxide fuel cells: PrBa0.5Sr0.5Co2−xFexO5+δ. Sci. Rep. 3, 1–6 (2013). Chen, C. & Ciucci, F. Designing Fe-based oxygen catalysts by density functional theory calculations. Chem. Mater. 28, 7058–7065 (2016). Brown, J. J., Ke, Z., Geng, W. & Page, A. J. Oxygen vacancy defect migration in titanate perovskite surfaces: effect of the A-site cations. J. Phys. Chem. C. 122, 14590–14597 (2018). Kwon, O. et al. Probing one-dimensional oxygen vacancy channels driven by cation-anion double ordering in perovskites. Nano Lett. 20, 8353–8359 (2020). Neagu, D. et al. Nano-socketed nickel particles with enhanced coking resistance grown in situ by redox exsolution. Nat. Commun. 6, 1–8 (2015). Niu, B. 
et al. In-situ growth of nanoparticles-decorated double perovskite electrode materials for symmetrical solid oxide cells. Appl. Catal. B Environ. 270, 118842 (2020). Kim, H., Joo, S., Kwon, O., Choi, S. & Kim, G. Cobalt-free Pr0.5Ba0.4Sr0.1FeO3-δ as a highly efficient cathode for commercial YSZ-supported solid oxide fuel cell. ChemElectroChem 7, 4378–4382 (2020). Liu, B. Q., Dong, X., Xiao, G., Zhao, F. & Chen, F. A novel electrode material for symmetrical SOFCs. Adv. Mater. 22, 5478–5482 (2010). Lu, X. et al. Mo-doped Pr0.6Sr0.4Fe0.8Ni0.2O3-δ as potential electrodes for intermediate-temperature symmetrical solid oxide fuel cells. Electrochim. Acta 227, 33–40 (2017). He, W., Wu, X., Dong, F. & Ni, M. A novel layered perovskite electrode for symmetrical solid oxide fuel cells: PrBa(Fe0.8Sc0.2)2O5+δ. J. Power Sources 363, 16–19 (2017). Zhang, Y., Zhao, H., Du, Z., Świerczek, K. & Li, Y. High-performance SmBaMn2O5+δ electrode for symmetrical solid oxide fuel cell. Chem. Mater. 31, 3784–3793 (2019). Cai, H. et al. Cobalt–free La0.5Sr0.5Fe0.9Mo0.1O3–δ electrode for symmetrical SOFC running on H2 and CO fuels. Electrochim. Acta 320, 134642 (2019). Lu, C. et al. Efficient and stable symmetrical electrode La0.6Sr0.4Co0.2Fe0.7Mo0.1O3–δ for direct hydrocarbon solid oxide fuel cells. Electrochim. Acta 323, 134857 (2019). Hansen, J. B. Solid oxide electrolysis - a key enabling technology for sustainable energy scenarios. Faraday Discuss 182, 9–48 (2015). Zhu, J. et al. Enhancing CO2 catalytic activation and direct electroreduction on in-situ exsolved Fe/MnOx nanoparticles from (Pr,Ba)2Mn2-yFeyO5+δ layered perovskites for SOEC cathodes. Appl. Catal. B Environ. 268, 118389 (2020). Zhou, Y. et al. Enhancing CO2 electrolysis performance with vanadium-doped perovskite cathode in solid oxide electrolysis cell. Nano Energy 50, 43–51 (2018). Kim, C. et al. Highly efficient CO2 utilization via aqueous zinc–or aluminum–CO2 systems for hydrogen gas evolution and electricity production. Angew. Chem. Int. Ed. 58, 9506–9511 (2019). Kresse, G. & Hafner, J. Ab initio molecular dynamics for liquid metals. Phys. Rev. B 47, 558–561 (1993). Sholl. D. & Steckel, J. A. Density functional theory: a practical introduction. (John Wiley & Sons, Inc., 2011) Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865–3868 (1996). Blöchl, P. E. Projector augmented-wave method. Phys. Rev. B. 50, 17953–17979 (1994). Kresse, G. & Joubert, D. From ultrasoft pseudopotentials to the projector augmented-wave method. Phys. Rev. B. 59, 1758–1775 (1999). Kirklin, S. et al. The open quantum materials database (OQMD): assessing the accuracy of DFT formation energies. npj Comput. Mater. 1, 1–15 (2015). Bouadjemi, B., Bentata, S., Abbad, A., Benstaali, W. & Bouhafs, B. Half-metallic ferromagnetism in PrMnO3 perovskite from first principles calculations. Solid State Commun. 168, 6–10 (2013). Monkhorst, H. J. & Pack, J. D. Special points for Brillouin-zone integrations. Phys. Rev. B. 13, 5188 (1976). ADS MathSciNet Google Scholar Nørskov, J. K. et al. Origin of the overpotential for oxygen reduction at a fuel-cell cathode. J. Phys. Chem. B. 108, 17886–17892 (2004). Wang, L., Maxisch, T. & Ceder, G. Oxidation energies of transition metal oxides within the GGA + U framework. Phys. Rev. B. 73, 195107 (2006). Zhang, D. et al. Preparation and characterization of a redox-stable Pr0.4Sr0.6Fe0.875Mo0.125O3-δ material as a novel symmetrical electrode for solid oxide cell application. Int. J. Hydrog. 
Energy 45, 21825–21835 (2020). This work was supported by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) and the Ministry of Trade, Industry & Energy (MOTIE) of the Republic of Korea (No. 20213030030150) and the National Research Foundation (NRF) funded by the Ministry of Education (NRF-2019R1C1C1005801, NRF-2021M3I3A1084292, and NRF-2021R1A2C3004019). This work was also supported by "CO2 utilization battery for hydrogen production based on fault-tolerance deep learning" (No. 1.200097.01). The Xray absorption fine structure experiments performed at the beamline 6D of Pohang Accelerator Laboratory was supported by the Pohang University of Science and Technology (POSTECH) and Ulsan National Institute of Science and Technology Central Research Facilities center (UCRF). These authors contributed equally: Hyunmin Kim, Chaesung Lim and Ohhun Kwon. School of Energy and Chemical Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan, 44919, Republic of Korea Hyunmin Kim, Jinkyung Oh & Guntae Kim Department of Chemical Engineering, Pohang University of Science and Technology (POSTECH), Pohang, 37673, Republic of Korea Chaesung Lim, Matthew T. Curnan & Jeong Woo Han Department of Chemical and Biomolecular Engineering, University of Pennsylvania, Philadelphia, PA, 19104, USA Ohhun Kwon Department of Materials Science and Engineering and UNIST Central Research Facilities (UCRF), Ulsan National Institute of Science and Technology (UNIST), Ulsan, 44919, Republic of Korea Hu Young Jeong Department of Mechanical Engineering (Aeronautics, Mechanical and Electronic Convergence Engineering), Kumoh National Institute of Technology, Gyeongbuk, 39177, Republic of Korea Sihyuk Choi Hyunmin Kim Chaesung Lim Jinkyung Oh Matthew T. Curnan Jeong Woo Han Guntae Kim H.K. and O.K. carried out most of the experimental works and contributed to manuscript writing. C.L. performed DFT calculations. M.T.C. gave help on additional DFT calculations. J.O. performed the gas chromatography (GC) analysis. H.Y.J. conducted TEM measurements and analyzed the TEM images. S.C., J.W.H., and G.K. designed the experiments and analyzed the data. All authors contributed to the discussions and analysis of the results regarding the manuscript. Correspondence to Sihyuk Choi, Jeong Woo Han or Guntae Kim. Peer review file Kim, H., Lim, C., Kwon, O. et al. Unveiling the key factor for the phase reconstruction and exsolved metallic particle distribution in perovskites. Nat Commun 12, 6814 (2021). https://doi.org/10.1038/s41467-021-26739-1
Journal of Fluid Mechanics
Thermophoresis of a spherical particle: modelling through moment-based, macroscopic transport equations
2 Problem formulation
2.1 Thermophoresis on a sphere by a uniform temperature gradient in the far field
2.2 Uniform flow past a sphere
2.3 Equivalent Stokes–Fourier system of equations
3 Solution applying the method of multipole potentials
3.1 Particular solutions
3.2 Solution of the homogeneous equations
3.3 Final field expressions – system of equations for the integration constants
4 Results and discussion
4.1 Thermophoresis on a heat-conducting sphere
4.1.1 Flow and temperature fields
4.1.2 Thermophoretic force
4.2 Uniform flow past a heat-conducting sphere
4.3 Thermophoretic velocity
5 Concluding remarks
Journal of Fluid Mechanics, Volume 862, 10 March 2019, pp. 312-347
Juan C. Padrino (a1), James E. Sprittles (a2) and Duncan A. Lockerby (a1)
1 School of Engineering, University of Warwick, Coventry CV4 7AL, UK
2 Mathematics Institute, University of Warwick, Coventry CV4 7AL, UK
© 2019 Cambridge University Press. This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
DOI: https://doi.org/10.1017/jfm.2018.907
Published online by Cambridge University Press: 10 January 2019
Figures:
Figure 1. Sketch for the problem of thermophoresis on a sphere ($G^{*}=\partial T^{*}/\partial z^{*}|_{\infty}$, the far-field temperature gradient) or uniform flow past a sphere ($G^{*}=U_{0}^{*}$, the far-field gas velocity). The sphere's radius is denoted by $a^{*}$. In both cases, the flow field is axisymmetric with respect to the $z^{*}$ axis. Unit vector $\text{k}$ points in the direction of the positive $z^{*}$ semi-axis. The spherical coordinate system $\{r^{*},\vartheta,\varphi\}$, with origin at the sphere's centre, is depicted.
Figure 2. Profiles of (a) radial velocity, (b) polar velocity, (c) density deviation and (d) temperature deviation in the gas as functions of the radial coordinate for the problem of thermophoresis of a sphere with a uniform temperature (i.e. $\Lambda\rightarrow\infty$). Results are obtained from the R13 exact solution, the numerical solution of Sone (2007) and his asymptotic expression for $k\rightarrow 0$. Sone's results are for a hard-sphere gas. Knudsen number $Kn$ is related to parameter $k$ as $Kn=\sqrt{2}\gamma_{1}k/2$, with $\gamma_{1}=1.270042427$ from Sone (2007).
Figure 3. Speed contours and velocity streamlines in the case of thermophoresis of a sphere for $Kn=0.02$ and 0.2, and $\Lambda=4$ and $\Lambda\rightarrow\infty$ computed with the exact solution from R13. Far-field temperature gradient points to the right.
Figure 4. Temperature contours and heat-flux streamlines in the case of thermophoresis of a sphere for $Kn=0.02$ and 0.2, and $\Lambda=4$ and $\Lambda\rightarrow\infty$ computed with the exact solution from R13. Between the contour levels 0 and 1.0 ($-1.0$), the levels shown correspond to 0.1, 0.3 and 0.5 ($-0.1$, $-0.3$ and $-0.5$); then they increase by 0.5 (decrease by $-0.5$). Far-field temperature gradient points to the right.
Figure 5.
Sketch of the thermal-stress slip flow on the surface of a sphere (gas motion from hot to cold). The thin lines represent isothermal surfaces in the gas; the thick line represents the sphere's surface (with uniform temperature $\theta_{s}$) and the dashed line is the axis of symmetry.
Table 1. Coefficients used in expression (4.3) for the thermophoretic force on a sphere modelled with R13.
Figure 6. Dimensionless thermophoretic force from various theories as a function of Knudsen number for a heat-conducting spherical particle in a gas with a constant temperature gradient in the far field. Plots correspond to particle-to-gas thermal conductivity ratios of (a) $\Lambda=4$, (b) 10 and (c) $22.4\times 10^{3}$. The experimental data of Bosworth & Ketsdever (2016) for ABS spheres ($\Lambda=10$) and of Bosworth et al. (2016) for copper spheres ($\Lambda=22.4\times 10^{3}$) are included in (b) and (c), respectively. The model by Sone & Aoki (1983) assumes a uniform temperature in the entire sphere ($\Lambda\rightarrow\infty$). Predictions from Waldmann's (1959) model, valid for the free-molecule regime ($Kn\gg 1$) and independent of $\Lambda$, are also included. The thin-dashed line indicates the zero-force level.
Figure 7. Profiles of (a) radial velocity, (b) polar velocity, (c) density deviation and (d) temperature deviation in the gas as functions of the radial coordinate for the problem of streaming flow past a sphere with a uniform temperature (i.e. $\Lambda\rightarrow\infty$). Results are from the R13 exact solutions obtained here and from Torrilhon (2010), the numerical solution of Sone (2007) for $k=0.1$, and his asymptotic expression for $k\rightarrow 0$. Sone's results are for a hard-sphere gas. Knudsen number $Kn$ is related to parameter $k$ as $Kn=\sqrt{2}\gamma_{1}k/2$, with $\gamma_{1}=1.270042427$ from Sone (2007); $k=0.055$ and 0.33 correspond to $Kn=0.05$ and 0.3, respectively.
Table 2. Coefficients used in expression (4.7) for the drag force on a sphere caused by a slow, streaming flow modelled with R13.
Figure 8. Drag force acting on a sphere due to a streaming flow non-dimensionalized with Stokes' drag versus Knudsen number resulting from the R13 moment equations for particle-to-gas heat conductivity ratios $\Lambda=4$ and $\Lambda\rightarrow\infty$. Results from Torrilhon (2010) with R13; from Young (2011) with G13 using (a) Maxwell–Smoluchowski's set of coefficients and (b) an alternative set (see appendix D); from Sone (2007) using a model Boltzmann equation, and from experiments (Goldberg 1954; Allen & Raabe 1982) are also included.
Figure 9. Dimensionless thermophoretic velocity $Ma$ divided by the product $\sqrt{\pi/2}\,Kn\,Ep$ as a function of Knudsen number from the R13 moment equations and from other models in the literature. The thin-dashed line indicates the zero-velocity level. Except for Waldmann–Epstein's formula, the results are obtained for solid-to-gas heat conductivity ratios $\Lambda=4$, 10 and $22.4\times 10^{3}$.
We consider the linearized form of the regularized 13-moment equations (R13) to model the slow, steady gas dynamics surrounding a rigid, heat-conducting sphere when a uniform temperature gradient is imposed far from the sphere and the gas is in a state of rarefaction. Under these conditions, the phenomenon of thermophoresis, characterized by forces on the solid surfaces, occurs. The R13 equations, derived from the Boltzmann equation using the moment method, provide closure to the mass, momentum and energy conservation laws in the form of constitutive, transport equations for the stress and heat flux that extend the Navier–Stokes–Fourier model to include non-equilibrium effects. We obtain analytical solutions for the field variables that characterize the gas dynamics and a closed-form expression for the thermophoretic force on the sphere. We also consider the slow, streaming flow of gas past a sphere using the same model resulting in a drag force on the body. The thermophoretic velocity of the sphere is then determined from the balance between thermophoretic force and drag. The thermophoretic force is compared with predictions from other theories, including Grad's 13-moment equations (G13), variants of the Boltzmann equation commonly used in kinetic theory, and with recently published experimental data. The new results from R13 agree well with results from kinetic theory up to a Knudsen number (based on the sphere's radius) of approximately 0.1 for the values of solid-to-gas heat conductivity ratios considered. However, in this range of Knudsen numbers, where for a very high thermal conductivity of the solid the experiments show reversed thermophoretic forces, the R13 solution, which does result in a reversal of the force, as well as the other theories predict significantly smaller forces than the experimental values. For Knudsen numbers between 0.1 and 1 approximately, the R13 model of thermophoretic force qualitatively shows the trend exhibited by the measurements and, among the various models considered, results in the least discrepancy.
Thermophoresis refers to the force and, potentially, motion experienced by solid particles or surfaces exposed to gas under rarefied conditions and in the presence of a temperature gradient. This phenomenon seems to have been first noted by Tyndall (1870) while observing the spatial redistribution of ambient dust in the proximity of a heated surface. Commonly, the thermophoretic force has been assumed to point from the hot to the cold region, that is, opposite to the temperature gradient. This condition has been labelled as 'positive' thermophoresis. On the other hand, a reversal of the force direction, known as 'negative' thermophoresis, has also been predicted and – although rarely – observed, as discussed below.
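The abstract above obtains the thermophoretic velocity from the balance between the thermophoretic force and the drag on the sphere. Purely as an illustration of that balance, and not of the paper's R13 expressions (whose coefficients are not reproduced here), the sketch below uses the continuum Stokes drag as a stand-in for the drag law.

    import math

    def thermophoretic_velocity(f_thermo, viscosity, radius):
        # Velocity at which the drag balances the thermophoretic force.
        # Stokes drag 6*pi*mu*a*U is used only as a continuum-limit stand-in
        # for the rarefied (R13) drag expression discussed in the paper.
        return f_thermo / (6.0 * math.pi * viscosity * radius)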
Thermophoresis belongs to a class of phenomena promoted in a gas away from thermodynamic equilibrium, which occurs when the collisions between gas molecules are insufficient. Among the various effects in this class are the velocity slip and temperature jump at solid walls and liquid–gas interfaces, transpiration flow, thermal stress, Knudsen layers and heat flux without temperature gradients (Sone 2007; Struchtrup & Torrilhon 2008). Review papers (Talbot et al. 1980; Bakanov 1991; Zheng 2002) and book sections (Sone 2007) have been written on the subject of thermophoresis. In particular, Young (2011) presented a comprehensive examination of the various theories on particle thermophoresis at arbitrary Knudsen numbers under the light of experimental data, and concluded that the accuracy of the measurements and the interval of Knudsen numbers explored are such that confirmation of the validity of the theories could not be achieved. The main measure of departure from local thermodynamic equilibrium in a gas is the Knudsen number, defined as the ratio of the mean free path between collisions and a characteristic macroscopic length scale. Typically, it is accepted that for Knudsen numbers below 0.01 the Navier–Stokes equations and the Fourier law for conductive heat flux are reliable. Beyond this threshold, because departure from equilibrium may be significant, predictions from these models become doubtful and models from kinetic theory, based on the Boltzmann equation for the molecular velocity distribution function, take precedence. For Knudsen numbers of order unity or higher, solution of the Boltzmann equation either directly or by means of stochastic techniques such as the direct simulation Monte Carlo method (DSMC) of Bird (1994) are computationally tractable, in general. On the other hand, in the interval of Knudsen numbers between, roughly, 0.01 and 1 such computations become increasingly expensive (Torrilhon 2016). Since for gas flows confined in micro- or nano-devices the Knudsen number may lie in the transition regime or beyond, it is important to resort to models capable of describing, at least qualitatively, rarefaction effects when designing or analysing such systems. Other examples include the transport of particles or droplets by a gaseous stream, or of bubbles by a liquid, when the size of these solid or fluid objects lies in the micrometre or nanometre range. Although the Navier–Stokes–Fourier equations of classical hydrodynamics can be extended into the transition regime for Knudsen numbers beyond 0.01 by including slip and jump effects in the boundary conditions, not all rarefaction effects occurring in the bulk can be captured by this approximation (Mohammadzadeh et al. 2015; Torrilhon 2016). Another aspect relevant to modelling micro- or nano-flows is that, for such small scales, the flow Reynolds number is typically smaller than one, so that neglecting inertia is an admissible assumption. Efforts to model thermophoresis or related phenomena can be traced back for more than a century. The first theory on gas motion induced by temperature gradients was due to Maxwell (1879). His analysis stems from kinetic theory. Later, Epstein (1929) presented a theory for the thermophoretic force on a spherical particle for small Knudsen numbers based on the continuum approach that takes into account thermal slip as well as the particle's thermal conductivity. Waldmann (1959) derived a model valid in the free-molecule regime (large Knudsen numbers) that has been widely applied. 
Brock (1962) improved Epstein's theory to develop an expression for the thermophoretic force for Knudsen numbers ${\lesssim}$ 0.1. Talbot et al. (1980) proposed a correlation for the transition regime by modifying the coefficients in Brock's expression such that Waldmann's free-molecule regime expression was approached at large Knudsen numbers. A commonly used route to study gas flow in the transition regime from first principles, since numerical solutions of the Boltzmann equation can be computationally very costly, has been the analysis of linearized versions of the Boltzmann equation with simpler models for the collision term. Some of the simpler models applied to the problem of thermophoresis of a spherical particle include the Bhatnagar, Gross and Krook model (BGK) (Sone & Aoki 1983; Yamamoto & Ishihara 1988; Takata et al. 1993; Sone 2007), the S model (Beresnev & Chernyak 1995) and the hard-sphere model (Sone 2007). An alternative approach to model rarefied gas flow is the use of macroscopic transport equations derived using moment methods. In the original moment method, introduced by Grad (1949), the distribution function in the Boltzmann equation was expanded in Hermite polynomials and the macroscopic variables describing the flow were represented as moments (integrals) of this distribution. For the first approximation beyond the Navier–Stokes–Fourier equations, in total, thirteen moments are needed for the same number of fields, namely, mass density, macroscopic velocity vector, temperature, heat-flux vector and deviatoric stress tensor (symmetric and trace free), yielding Grad's 13-moment equations (G13). The pressure appears through an equation of state, typically, the ideal gas law. Dwyer (1967) presented a theory for the thermophoretic force on a spherical particle based on Grad's moment method predicting, for the first time, reversed thermophoresis. Recently, Young (2011) noted that Dwyer did not account for the totality of the stress and heat-flux coupling terms in the temperature jump boundary condition. Young corrected this and rederived the expression for the thermophoretic force from the G13 equations. He then modified this result to present an interpolation formula that fits Waldmann's expression for large Knudsen numbers and, by introducing values for the thermal creep, velocity slip, and temperature jump coefficients cited by Sharipov (2004) based on solutions of model Boltzmann equations, also matches results from kinetic theory for small Knudsen numbers. A well-known deficiency of the G13 equations is their inability to describe Knudsen layers, that is, regions adjacent to solid surfaces where rarefaction effects are conspicuous. Struchtrup & Torrilhon (2003) regularized Grad's 13-moment equations (R13) by adding second-order derivatives to the closures. Starting from the Boltzmann equation, the R13 equations are best derived using the order of magnitude method (Struchtrup 2005a; Struchtrup et al. 2017). The R13 equations are equipped with a set of boundary conditions which are valid at walls or gas–liquid interfaces without mass transfer (Torrilhon & Struchtrup 2008) or at evaporating and condensing interfaces (Struchtrup & Frezzotti 2016; Struchtrup et al. 2017). In contrast to G13, these equations can partially capture the structure and effects of the Knudsen layer.
Because they involve the typical variables describing fluid flow and heat transfer at the macroscopic level, interpretation of results may be facilitated by inspecting specific terms in the differential equations, contributing to a good physical understanding. In addition, the extension of tested, well-established numerical techniques in computational fluid dynamics to this system of equations may be achieved straightforwardly. Also, because the boundary conditions have been derived for sharp surfaces of discontinuity, R13 equations can be applied to two-phase systems involving a rarefied gas and a liquid sharing an interface whose instantaneous position is another unknown in the problem. With R13, the limit of continuum models to give meaningful results when rarefaction effects are important has been pushed to a Knudsen number of approximately 0.5 in the transition regime (Torrilhon 2016; Struchtrup et al. 2017). Early advances in the development of moment methods, with emphasis on R13, can be found in the textbook by Struchtrup (2005b ); more recent developments and applications of R13 have been compiled in the reviews by Struchtrup & Taheri (2011) and Torrilhon (2016). To the best of our knowledge, application of R13 to investigate transport phenomena involving spherical or near spherical particles in rarefied environments is limited to the analytical work of Torrilhon (2010) on the slow flow of gas past a sphere, and to the numerical treatment of the same problem by Claydon et al. (2017) using a mesh-free method. The application of R13 to model thermophoresis on a spherical particle has not yet been pursued. After recognizing the complexity of the R13 equations in comparison to G13, Young (2011) carried out the modelling of this problem with the latter. In the conclusions of his article, he recommends 'solving the R13-moment equations in order to study reversed thermophoresis in greater detail'. The aim of this work is to obtain the thermophoretic force acting on a sphere surrounded by a rarefied gas exposed to a uniform temperature gradient far from the sphere using the R13 equations and taking into account heat conductivity inside the sphere. In addition, we compute the thermophoretic velocity of the sphere when this is free to move under the thermophoretic force. This velocity corresponds to the balance between the thermophoretic force and the drag caused on the sphere in its motion by the surrounding gas. To model the drag, we expand the scope in Torrilhon's (2010) work with R13 to include the thermal conductivity of the solid (i.e. a non-isothermal sphere), even though the drag has been shown to be fairly insensitive to changes in this parameter (Sone 2007). Instead of using the form of the solution for classical Stokes flow past a sphere to write the ansatz for the equations (cf. Torrilhon 2010), we apply a somewhat more general approach, namely, the method of multipole potentials. We present closed-form expressions for the thermophoretic force and the drag resulting from R13. We compare predictions for the thermophoretic force from this model with results from simplified models of the Boltzmann equation used in kinetic theory and from other systems of moment equations. We include in the comparison the new experimental data by Bosworth et al. (2016) for both positive and negative thermophoresis. The content of this paper is organized as follows. 
In the next section, we separately formulate the problems of thermophoresis of a spherical particle and of slow flow of a rarefied gas past a sphere and introduce the main tool of analysis, the regularized 13-moment equations, or R13, and their boundary conditions for the gas–solid interface in linearized form. We then rewrite the system of equations in a different form, more convenient for the solution method of the following section. In § 3, we describe the solution of the system of equations; the procedure involves the method of multipole potentials. Next, § 4 begins by discussing results for the problem of thermophoresis on a sphere, including spatial profiles for the macroscopic field variables, contour plots and streamline patterns and the thermophoretic force, exploring the effect of the solid-to-gas thermal conductivity ratio. For the force, we compare results from R13 with recent experimental data showing reversed thermophoresis and with other models from the literature. Then, we present results for the drag force arising from the gas flow past a sphere from R13 considering the sphere's heat conductivity and compare with theoretical predictions from the literature, including Torrilhon's (2010) R13 results in the case of an isothermal particle, which serves as validation of the solution method implemented here. The section closes presenting results for the thermophoretic velocity from R13 and other models. Finally, § 5 contains some concluding remarks. In this section we formulate mathematically two problems involving a sphere in a rarefied gas. First, we consider the problem of thermophoresis of a spherical particle with the gas far from the sphere at rest with a uniform temperature gradient. Second, we detail the problem of a uniform flow past a sphere with no temperature gradient imposed in the far field. The results of these two problems will be combined later to obtain the thermophoretic velocity of a sphere. Consider a gas at rest with uniform, constant pressure $p_{0}^{\ast }$ and temperature $T_{0}^{\ast }$ surrounding a sphere of radius $a^{\ast }$ at the same temperature and motionless with respect to the laboratory frame. Under these conditions, the gas is in equilibrium with vanishing heat flux and deviatoric stress. In energy units, the temperature in this state is given by $\unicode[STIX]{x1D703}_{0}^{\ast }=R^{\ast }T_{0}^{\ast }$ , where $R^{\ast }$ is the gas specific constant. Suppose that this state of equilibrium is disturbed by imposing, far from the sphere, a temperature field, $T_{0}^{\ast }+z^{\ast }(\unicode[STIX]{x2202}T^{\ast }/\unicode[STIX]{x2202}z^{\ast })_{\infty }$ , with $(\unicode[STIX]{x2202}T^{\ast }/\unicode[STIX]{x2202}z^{\ast })_{\infty }$ constant, and where plane $z^{\ast }=0$ passes through the sphere's centre (figure 1). The viscosity and thermal conductivity coefficient of the gas evaluated at the equilibrium state are denoted by $\unicode[STIX]{x1D707}_{0}^{\ast }$ and $k_{0}^{\ast }$ , respectively. The thermal conductivity coefficient for the sphere's material is denoted by $k_{s}^{\ast }$ (constant). The gas is assumed to be ideal, so that in the equilibrium state the gas density is $\unicode[STIX]{x1D70C}_{0}^{\ast }=p_{0}^{\ast }/\unicode[STIX]{x1D703}_{0}^{\ast }$ . Assume also that the ratio of the gas molecules' mean free path to the sphere radius is such that rarefaction effects cannot be ignored. 
On the basis of this ratio, we introduce a Knudsen number (2.1) $$\begin{eqnarray}Kn=\frac{\unicode[STIX]{x1D707}_{0}^{\ast }\,\unicode[STIX]{x1D703}_{0}^{\ast \,1/2}}{p_{0}^{\ast }\,a^{\ast }}.\end{eqnarray}$$ Note that, from kinetic theory, a commonly used definition for the gas mean free path in the undisturbed state is $(\unicode[STIX]{x03C0}/2)^{1/2}\unicode[STIX]{x1D707}_{0}^{\ast }\,\unicode[STIX]{x1D703}_{0}^{\ast \,1/2}/p_{0}^{\ast }$ . In definition of (2.1), we have dropped the factor $(\unicode[STIX]{x03C0}/2)^{1/2}$ for simplicity. In what follows, we consider the governing equations for a monatomic gas composed of Maxwell molecules. In this case, the Prandtl number $Pr=2/3$ . From the well-known definition of $Pr$ , a useful relationship between the gas thermal conductivity and dynamic viscosity can be obtained (Struchtrup 2005b ) (2.2) $$\begin{eqnarray}k_{0}^{\ast }=\frac{5}{2}\frac{\unicode[STIX]{x1D707}_{0}^{\ast }R^{\ast }}{Pr}=\frac{15}{4}\unicode[STIX]{x1D707}_{0}^{\ast }R^{\ast }.\end{eqnarray}$$ For a monatomic gas, the ratio of specific heats is $\unicode[STIX]{x1D6FE}=5/3$ (e.g. see Young 2011). With the aim of modelling flow and heat transfer phenomena in a rarefied gas, we consider the conservation laws for mass, momentum and energy, supplemented by the constitutive equations for the deviatoric stress and heat flux from the R13 theory, and the associated augmented set of boundary conditions for a gas–solid interface. We are interested here in the steady-state gas flow and temperature fields, in the gas and solid, resulting from the far-field temperature gradient and gas rarefaction. Assuming that the dimensionless group $a^{\ast }(\unicode[STIX]{x2202}T^{\ast }/\unicode[STIX]{x2202}z^{\ast })_{\infty }/T_{0}^{\ast }\ll 1$ , we can model the transport phenomena using the linearized version of the governing equations written in terms of deviations from the equilibrium state. The non-dimensional form of these deviations can be written as (2.3) $$\begin{eqnarray}\left.\begin{array}{@{}c@{}}p=p^{\ast }/p_{0}^{\ast },\quad \unicode[STIX]{x1D703}=\unicode[STIX]{x1D703}^{\ast }/\unicode[STIX]{x1D703}_{0}^{\ast },\quad \unicode[STIX]{x1D70C}=\unicode[STIX]{x1D70C}^{\ast }/\unicode[STIX]{x1D70C}_{0}^{\ast },\\ \text{u}=\text{u}^{\ast }/\unicode[STIX]{x1D703}_{0}^{\ast \,1/2},\quad \text{q}=\text{q}^{\ast }/(p_{0}^{\ast }\unicode[STIX]{x1D703}_{0}^{\ast \,1/2}),\quad \unicode[STIX]{x1D748}=\unicode[STIX]{x1D748}^{\ast }/p_{0}^{\ast },\end{array}\right\}\end{eqnarray}$$ for the pressure, temperature, density, velocity, heat flux and deviatoric stress, respectively, in the gas. The deviatoric stress is symmetric and trace free. Length is non-dimensionalized with the sphere's radius $a^{\ast }$ . The temperature deviation in the sphere $\unicode[STIX]{x1D703}_{s}^{\ast }$ is non-dimensionalized as that for the gas. The dimensionless form of the temperature gradient defines a new dimensionless group, the Epstein number – coined by Young (2011) after P. S. Epstein, a pioneer in the study of thermophoresis of spherical particles, who presented the first theory on the subject (Epstein 1929). It is given by (2.4) $$\begin{eqnarray}Ep=\frac{a^{\ast }(\unicode[STIX]{x2202}T^{\ast }/\unicode[STIX]{x2202}z^{\ast })_{\infty }}{T_{0}^{\ast }}.\end{eqnarray}$$ Note that the non-dimensional pressure and temperature fields in the gas are given by $1+p$ and $1+\unicode[STIX]{x1D703}$ , respectively, whereas the temperature in the sphere is $1+\unicode[STIX]{x1D703}_{s}$ . 
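To make these definitions concrete, the following minimal Python sketch evaluates $Kn$, $k_{0}^{\ast }$ and $Ep$ from (2.1), (2.2) and (2.4). All property values in it are assumed, order-of-magnitude inputs chosen only for illustration (roughly argon-like); they are not data taken from this work.

# Illustrative evaluation of the dimensionless groups (2.1), (2.2) and (2.4).
# All property values below are assumed, order-of-magnitude inputs, not data from the paper.
mu0 = 2.1e-5      # gas viscosity at the reference state [Pa s] (assumed)
R = 208.0         # specific gas constant [J/(kg K)] (assumed, argon-like)
T0 = 300.0        # reference temperature [K] (assumed)
p0 = 100.0        # reference pressure [Pa]; a low pressure gives a rarefied gas (assumed)
a = 1.0e-4        # sphere radius [m] (assumed)
dTdz = 1.0e3      # far-field temperature gradient [K/m] (assumed)

theta0 = R * T0                      # temperature in energy units, theta0* = R* T0*
Kn = mu0 * theta0**0.5 / (p0 * a)    # Knudsen number, definition (2.1)
k0 = 15.0 / 4.0 * mu0 * R            # gas thermal conductivity from (2.2) (Maxwell molecules)
Ep = a * dTdz / T0                   # Epstein number, definition (2.4)
print(f"Kn = {Kn:.3f}, k0 = {k0:.4e} W/(m K), Ep = {Ep:.3e}")

With these inputs the sketch returns a Knudsen number in the transition regime and an Epstein number much smaller than one, consistent with the linearization assumption invoked above.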
On the other hand, $\text{ u}$ , $\text{q}$ and $\unicode[STIX]{x1D748}$ in (2.3) determine the actual velocity, heat flux and deviatoric stress in the gas. The linearized, steady conservation equations of gas, momentum and energy are (2.5a ) $$\begin{eqnarray}\displaystyle & \displaystyle \unicode[STIX]{x1D735}\boldsymbol{\cdot }\text{u}=0, & \displaystyle\end{eqnarray}$$ (2.5b ) $$\begin{eqnarray}\displaystyle & \displaystyle \unicode[STIX]{x1D735}p+\unicode[STIX]{x1D735}\boldsymbol{\cdot }\unicode[STIX]{x1D748}=0, & \displaystyle\end{eqnarray}$$ (2.5c ) $$\begin{eqnarray}\displaystyle & \displaystyle \unicode[STIX]{x1D735}\boldsymbol{\cdot }\text{q}=0, & \displaystyle\end{eqnarray}$$ whilst the R13 constitutive equations for the deviatoric stress and heat flux are given by (Struchtrup 2005b ; Lockerby & Collyer 2016; Torrilhon 2016) (2.6a ) $$\begin{eqnarray}\displaystyle & \displaystyle \left(1-{\textstyle \frac{2}{3}}Kn^{2}\unicode[STIX]{x1D6E5}\right)\unicode[STIX]{x1D748}-{\textstyle \frac{4}{5}}Kn^{2}\overline{\unicode[STIX]{x1D735}\unicode[STIX]{x1D735}\boldsymbol{\cdot }\unicode[STIX]{x1D748}}=-2Kn\overline{\unicode[STIX]{x1D735}\text{u}}-{\textstyle \frac{4}{5}}Kn\overline{\unicode[STIX]{x1D735}\text{q}}, & \displaystyle\end{eqnarray}$$ (2.6b ) $$\begin{eqnarray}\displaystyle & \displaystyle \left(1-{\textstyle \frac{9}{5}}Kn^{2}\unicode[STIX]{x1D6E5}\right)\text{q}=-{\textstyle \frac{15}{4}}Kn\unicode[STIX]{x1D735}\unicode[STIX]{x1D703}-{\textstyle \frac{3}{2}}Kn\unicode[STIX]{x1D735}\boldsymbol{\cdot }\unicode[STIX]{x1D748}. & \displaystyle\end{eqnarray}$$ The overbar in expressions (2.6) denotes a symmetric and trace-free tensor; $\unicode[STIX]{x1D6E5}$ denotes the Laplacian operator. The temperature in the spherical particle satisfies the classical steady heat equation (2.7) $$\begin{eqnarray}\unicode[STIX]{x0394}\unicode[STIX]{x1D703}_{s}=0.\end{eqnarray}$$ Far from the sphere's surface, the imposed disturbance is represented by the uniform temperature gradient $Ep\,\text{k}$ , where $\text{k}$ designates the unit vector pointing in the direction of the positive $z$ semi-axis. It is convenient to write the problem in such a way that deviations from the base (equilibrium) state vanish in the far field. For this purpose we introduce the transformations (2.8a-c ) $$\begin{eqnarray}\unicode[STIX]{x1D703}=\check{\unicode[STIX]{x1D703}}+Ep\,z,\quad \unicode[STIX]{x1D703}_{s}=\check{\unicode[STIX]{x1D703}}_{s}+Ep\,z,\quad \text{q}=\check{\text{q}}-{\textstyle \frac{15}{4}}Kn\,Ep\,\text{k},\end{eqnarray}$$ whereas the rest of the variables are left unchanged. With (2.8), expressions (2.5)–(2.7) remain invariant in form. On the other hand, the boundary conditions do change after using (2.8). Note that subscript ' $s$ ' will be used to denote quantities in the solid sphere. Equations (2.5)–(2.7) will be solved subjected to the following boundary conditions taken from Struchtrup et al. (2017) in the absence of phase change (see also Struchtrup & Frezzotti 2016) at the solid–gas interface $r=1$ , with $r=|\text{x}|$ , where $\text{x}$ is the position vector with origin at the sphere's centre. The quantities at the interface corresponding to the liquid in Struchtrup et al. (2017) are, in the present work, associated with the solid (sphere). Note that in this case, boundary conditions (37)–(41) in Struchtrup et al. (2017) reduce to the wall boundary conditions (33)–(37) in Torrilhon & Struchtrup (2008) if the gas flow is two-dimensional. 
In all these equations for the boundary the accommodation coefficient has been set equal to one and $\unicode[STIX]{x1D703}$ , $\unicode[STIX]{x1D703}_{s}$ and $\text{q}$ have been substituted according to (2.8). The generalized slip condition is (2.9) $$\begin{eqnarray}\unicode[STIX]{x1D70E}_{t_{\unicode[STIX]{x1D6FC}}\,n}=\,-\left(\frac{2}{\unicode[STIX]{x03C0}}\right)^{1/2}\left[(u_{t_{\unicode[STIX]{x1D6FC}}}-u_{s,t_{\unicode[STIX]{x1D6FC}}})+\frac{1}{5}\left(\check{q}_{t_{\unicode[STIX]{x1D6FC}}}-\frac{15}{4}Kn\,Ep\,\text{k}\boldsymbol{\cdot }\text{t}_{\unicode[STIX]{x1D6FC}}\right)+\frac{1}{2}\text{m}_{t_{\unicode[STIX]{x1D6FC}}\,nn}\right].\end{eqnarray}$$ The generalized temperature jump condition is (2.10) $$\begin{eqnarray}\check{q}_{n}-\frac{15}{4}Kn\,Ep\,\text{k}\boldsymbol{\cdot }\text{n}=-\left(\frac{2}{\unicode[STIX]{x03C0}}\right)^{1/2}\left[2(\check{\unicode[STIX]{x1D703}}-\check{\unicode[STIX]{x1D703}}_{s})+\frac{1}{2}\unicode[STIX]{x1D70E}_{nn}+\frac{5}{28}\text{R}_{nn}\right].\end{eqnarray}$$ The generalized interface conditions for higher moments are (2.11) $$\begin{eqnarray}\displaystyle & \displaystyle \text{m}_{nnn}=\left(\frac{2}{\unicode[STIX]{x03C0}}\right)^{1/2}\left[\frac{2}{5}(\check{\unicode[STIX]{x1D703}}-\check{\unicode[STIX]{x1D703}}_{s})-\frac{7}{5}\unicode[STIX]{x1D70E}_{nn}-\frac{1}{14}\text{R}_{nn}\right], & \displaystyle\end{eqnarray}$$ (2.12) $$\begin{eqnarray}\displaystyle & \displaystyle \text{m}_{t_{\unicode[STIX]{x1D6FC}}t_{\unicode[STIX]{x1D6FD}}n}=-\left(\frac{2}{\unicode[STIX]{x03C0}}\right)^{1/2}\left[\unicode[STIX]{x1D70E}_{t_{\unicode[STIX]{x1D6FC}}t_{\unicode[STIX]{x1D6FD}}}+\frac{1}{14}\text{R}_{t_{\unicode[STIX]{x1D6FC}}t_{\unicode[STIX]{x1D6FD}}}+\frac{1}{5}\{(\check{\unicode[STIX]{x1D703}}-\check{\unicode[STIX]{x1D703}}_{s})-\unicode[STIX]{x1D70E}_{nn}\}\unicode[STIX]{x1D6FF}_{\unicode[STIX]{x1D6FC}\unicode[STIX]{x1D6FD}}\right], & \displaystyle\end{eqnarray}$$ (2.13) $$\begin{eqnarray}\text{R}_{t_{\unicode[STIX]{x1D6FC}}\,n}=\left(\frac{2}{\unicode[STIX]{x03C0}}\right)^{1/2}\left[(u_{t_{\unicode[STIX]{x1D6FC}}}-u_{s,t_{\unicode[STIX]{x1D6FC}}})-\frac{11}{5}\left(\check{q}_{t_{\unicode[STIX]{x1D6FC}}}-\frac{15}{4}Kn\,Ep\,\text{k}\boldsymbol{\cdot }\text{t}_{\unicode[STIX]{x1D6FC}}\right)-\frac{1}{2}\text{m}_{t_{\unicode[STIX]{x1D6FC}}\,nn}\right].\end{eqnarray}$$ Furthermore, the solution must also satisfy the non-penetration condition (2.14) $$\begin{eqnarray}u_{n}=0,\end{eqnarray}$$ and the interfacial linearized energy balance (Young 2011) (2.15) $$\begin{eqnarray}\check{q}_{n}+{\textstyle \frac{15}{4}}Kn\,(\unicode[STIX]{x1D6EC}-1)Ep\,\text{n}\boldsymbol{\cdot }\text{k}=-{\textstyle \frac{15}{4}}Kn\,\unicode[STIX]{x1D6EC}\,\text{n}\boldsymbol{\cdot }\unicode[STIX]{x1D735}\check{\unicode[STIX]{x1D703}}_{s},\end{eqnarray}$$ where $\unicode[STIX]{x1D6EC}$ is the solid-to-gas thermal conductivity ratio $k_{s}^{\ast }/k_{0}^{\ast }$ . Here, indices $\unicode[STIX]{x1D6FC}$ and $\unicode[STIX]{x1D6FD}$ can take a value of 1 or 2; $\text{n}$ is the unit vector normal to the interface pointing into the gas; $\text{t}_{1}$ and $\text{t}_{2}$ represent two mutually orthogonal unit vectors tangential to the interface, and subscripts $n$ , $t_{1}$ and $t_{2}$ denote components in the corresponding normal or any of the two tangential directions, respectively. In these interfacial conditions one must set $u_{s,t_{1}}=u_{s,t_{2}}=0$ . Regarding the far field, all deviations from the basic equilibrium state must vanish as $r\rightarrow \infty$ . 
Boundary conditions (2.14) and (2.15) hold for the interface between a fluid and an impenetrable solid. Their counterparts for a fluid–fluid interface are given by the conditions for mass and energy conservation presented, for instance, in appendix A of Struchtrup et al. (2017) – note that in their expression (A5), the internal energy should be written instead of the enthalpy. The interfacial conditions contain the components of the higher-order moments $\text{R}$ and $\text{m}$ , a rank-two and rank-three tensor, respectively. These are defined as (2.16a ) $$\begin{eqnarray}\displaystyle & \text{R}=-{\textstyle \frac{24}{5}}Kn\,\overline{\unicode[STIX]{x1D735}\check{\text{q}}}, & \displaystyle\end{eqnarray}$$ (2.16b ) $$\begin{eqnarray}\displaystyle & \text{m}=-2Kn\,\overline{\unicode[STIX]{x1D735}\unicode[STIX]{x1D748}}. & \displaystyle\end{eqnarray}$$ The full nonlinear expressions for the heat flux $\text{q}$ , deviatoric stress $\unicode[STIX]{x1D748}$ and higher-order moments $\text{R}$ and $\text{m}$ that lead to (2.6) and (2.16) can be found in Struchtrup (2005b , see chapters 7 and 9) – there, a third (scalar) higher-order moment is also present that contributes nothing to the linear equations when $\text{q}$ is divergence free. Once these equations are linearized, relations (2.16) are then used to eliminate $\text{R}$ and $\text{m}$ resulting in (2.6). It is important to note that because $\unicode[STIX]{x1D748}$ , $\text{R}$ and $\text{m}$ are trace-free tensors, we have that $\unicode[STIX]{x1D70E}_{nn}=-\unicode[STIX]{x1D70E}_{t_{1}t_{1}}-\unicode[STIX]{x1D70E}_{t_{2}t_{2}}$ , $\text{R}_{nn}=-\text{R}_{t_{1}t_{1}}-\text{R}_{t_{2}t_{2}}$ and $\text{m}_{nnn}=-\text{m}_{t_{1}t_{1}n}-\text{m}_{t_{2}t_{2}n}$ . Using these constraints, we can easily show that boundary condition (2.11) can be obtained by writing (2.12) twice, first for $\text{m}_{t_{1}t_{1}n}$ and then for $\text{m}_{t_{2}t_{2}n}$ , and adding the resulting expressions. Therefore, it suffices for the solution to satisfy only one of these equations provided (2.11) is also satisfied. Since these boundary conditions will be applied to a spherical interface, it is fitting to introduce a system of spherical-polar coordinates $(r,\unicode[STIX]{x1D717},\unicode[STIX]{x1D711})$ with its origin, $r=0$ , located at the centre of the solid sphere (figure 1). Here, $0\leqslant r<\infty$ , $0\leqslant \unicode[STIX]{x1D717}\leqslant \unicode[STIX]{x03C0}$ , and $0\leqslant \unicode[STIX]{x1D711}<2\unicode[STIX]{x03C0}$ . Semi-axis $\unicode[STIX]{x1D717}=0$ coincides with the positive $z$ semi-axis. To these coordinate directions correspond unit vectors $\hat{\text{r}}$ , $\hat{\unicode[STIX]{x1D751}}$ and $\hat{\unicode[STIX]{x1D753}}$ , respectively; these triplet forms an orthogonal set. Thus $\{\hat{\text{r}},\hat{\unicode[STIX]{x1D751}},\hat{\unicode[STIX]{x1D753}}\}$ take the place of the set $\{\text{n},\text{t}_{1},\text{t}_{2}\}$ when writing the boundary conditions. The problems considered here are axisymmetric, so quantities do not vary in the $\unicode[STIX]{x1D711}$ direction. Finally, if needed, the gas density deviation from its value at equilibrium can be computed from the linearized form of the ideal gas equation of state, $p=\unicode[STIX]{x1D70C}+\unicode[STIX]{x1D703}$ . 
Suppose that instead of prescribing a far-field temperature gradient, the state of equilibrium of the gas described in the previous section is disturbed by imposing, far from the sphere, a uniform flow with constant velocity $U_{0}^{\ast }$ in the $z^{\ast }$ -direction. We can model this flow, including rarefaction effects, by using the linearized conservation laws, R13 constitutive relations and boundary conditions written for the disturbances, provided the dimensionless (Reynolds) number $\unicode[STIX]{x1D70C}_{0}^{\ast }U_{0}^{\ast }a^{\ast }/\unicode[STIX]{x1D707}_{0}^{\ast }\ll 1$ . Again, steady state will be assumed. Our work will extend Torrilhon's (2010) efforts on this problem by including heat conduction throughout the sphere. Furthermore, we will present an expression for the drag force on the sphere from our analytical solution – a closed-form expression for the drag was not given in Torrilhon's work. Combining this expression for the drag with that for the thermophoretic force will result in the thermophoretic velocity when these two forces balance each other. For convenience, we transform the original problem to the equivalent one of the disturbance flow resulting from a sphere translating with dimensionless velocity $-Ma\,\text{k}$ in a fluid at rest far away from the sphere. Here, $Ma$ is a pseudo Mach number (2.17) $$\begin{eqnarray}Ma=\frac{U_{0}^{\ast }}{\unicode[STIX]{x1D703}_{0}^{\ast \,1/2}}.\end{eqnarray}$$ The actual Mach number can be obtained by multiplying $Ma$ by $\unicode[STIX]{x1D6FE}^{-1/2}$ , with $\unicode[STIX]{x1D6FE}$ the specific heat ratio. We thus set $\text{u}=\check{\text{u}}+Ma\text{k}$ , so that $\check{\text{u}}\rightarrow 0$ as $r\rightarrow \infty$ . The governing equations and boundary conditions for this problem are obtained from those in § 2.1 by writing $\check{\text{u}}$ instead of $\text{u}$ , dropping the ' $\check{~}\,$ ' from $\check{\unicode[STIX]{x1D703}}$ , $\check{\unicode[STIX]{x1D703}}_{s}$ and $\check{\text{q}}$ and setting $Ep=0$ , ${\check{u}}_{s,t_{1}}={\check{u}}_{s,\unicode[STIX]{x1D717}}=-Ma\,\text{k}\boldsymbol{\cdot }\hat{\unicode[STIX]{x1D751}}$ and ${\check{u}}_{s,t_{2}}={\check{u}}_{s,\unicode[STIX]{x1D711}}=0$ . In addition, instead of (2.14) we must have (2.18) $$\begin{eqnarray}{\check{u}}_{n}=-Ma\,\text{k}\boldsymbol{\cdot }\text{n}.\end{eqnarray}$$ Hereinafter, we drop the symbol ' $\check{~}\,$ ' unless otherwise noted. We shall proceed to rewrite the linearized R13-moment equations introduced previously as an equivalent Stokes–Fourier set of equations supplemented by non-homogeneous elliptic equations for the pressure, heat flux and deviatoric stress. To this alternative system of equations, we can apply, somewhat straightforwardly, analytical tools used successfully in scalar and, more notably, vector equations appearing in slow flow hydrodynamics.
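As a small, hedged illustration of the velocity scale involved, the sketch below converts an assumed far-field speed into the pseudo Mach number (2.17) and into the actual Mach number through the factor $\gamma ^{-1/2}$ with $\gamma =5/3$; the speed and reference state used are illustrative values, not data from this work.

# Pseudo Mach number (2.17) and actual Mach number for a monatomic gas (gamma = 5/3).
# The far-field speed and reference state below are assumed, illustrative values.
from math import sqrt

U0 = 1.0           # far-field speed [m/s] (assumed)
R = 208.0          # specific gas constant [J/(kg K)] (assumed, argon-like)
T0 = 300.0         # reference temperature [K] (assumed)
gamma = 5.0 / 3.0  # ratio of specific heats, monatomic gas

theta0 = R * T0
Ma_pseudo = U0 / sqrt(theta0)        # definition (2.17)
Ma_actual = Ma_pseudo / sqrt(gamma)  # actual Mach number, as noted after (2.17)
print(f"pseudo Ma = {Ma_pseudo:.4e}, actual Ma = {Ma_actual:.4e}")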
By introducing the auxiliary variables (2.19) $$\begin{eqnarray}\displaystyle \unicode[STIX]{x1D755} & = & \displaystyle \text{u}+{\textstyle \frac{2}{5}}\text{q},\end{eqnarray}$$ (2.20) $$\begin{eqnarray}\displaystyle \unicode[STIX]{x1D701} & = & \displaystyle \unicode[STIX]{x1D703}-{\textstyle \frac{2}{5}}p,\end{eqnarray}$$ it can be readily shown that expressions (2.5)–(2.6) can be written as the equivalent Stokes–Fourier system (2.21a ) $$\begin{eqnarray}\displaystyle & \unicode[STIX]{x1D735}\boldsymbol{\cdot }\unicode[STIX]{x1D755}=0, & \displaystyle\end{eqnarray}$$ (2.21b ) $$\begin{eqnarray}\displaystyle & \unicode[STIX]{x1D735}\unicode[STIX]{x1D6F1}+\unicode[STIX]{x1D735}\boldsymbol{\cdot }\unicode[STIX]{x1D72E}=0, & \displaystyle\end{eqnarray}$$ (2.21c ) $$\begin{eqnarray}\displaystyle & \unicode[STIX]{x1D735}\boldsymbol{\cdot }\unicode[STIX]{x1D734}=0, & \displaystyle\end{eqnarray}$$ (2.22a ) $$\begin{eqnarray}\displaystyle & \unicode[STIX]{x1D72E}=-2Kn\overline{\unicode[STIX]{x1D735}\unicode[STIX]{x1D755}}, & \displaystyle\end{eqnarray}$$ (2.22b ) $$\begin{eqnarray}\displaystyle & \unicode[STIX]{x1D734}=-{\textstyle \frac{15}{4}}Kn\unicode[STIX]{x1D735}\unicode[STIX]{x1D701}, & \displaystyle\end{eqnarray}$$ coupled with the set of equations (2.23) $$\begin{eqnarray}\displaystyle & \displaystyle (\unicode[STIX]{x1D6E5}-\unicode[STIX]{x1D706}_{1}^{2})p=-\unicode[STIX]{x1D706}_{1}^{2}\unicode[STIX]{x1D6F1},\quad \unicode[STIX]{x1D706}_{1}=(5/6)^{1/2}Kn^{-1}, & \displaystyle\end{eqnarray}$$ (2.24) $$\begin{eqnarray}\displaystyle & \displaystyle (\unicode[STIX]{x1D6E5}-\unicode[STIX]{x1D706}_{2}^{2})\text{q}=-\unicode[STIX]{x1D706}_{2}^{2}\unicode[STIX]{x1D734},\quad \unicode[STIX]{x1D706}_{2}=(5/9)^{1/2}Kn^{-1}, & \displaystyle\end{eqnarray}$$ (2.25) $$\begin{eqnarray}\displaystyle & \displaystyle (\unicode[STIX]{x1D6E5}-\unicode[STIX]{x1D706}_{3}^{2})\unicode[STIX]{x1D748}=-\unicode[STIX]{x1D706}_{3}^{2}\,\unicode[STIX]{x1D72E}+{\textstyle \frac{4}{5}}Kn^{2}\unicode[STIX]{x1D706}_{3}^{2}\overline{\unicode[STIX]{x1D735}\unicode[STIX]{x1D735}p},\quad \unicode[STIX]{x1D706}_{3}=(3/2)^{1/2}Kn^{\,-1}. & \displaystyle\end{eqnarray}$$ Furthermore, because of (2.21a ) and (2.21c ), (2.21b ) and (2.22b ) lead to (2.26) $$\begin{eqnarray}\displaystyle & \displaystyle \unicode[STIX]{x0394}\unicode[STIX]{x1D6F1}=0, & \displaystyle\end{eqnarray}$$ (2.27) $$\begin{eqnarray}\displaystyle & \displaystyle \unicode[STIX]{x0394}\unicode[STIX]{x1D701}=0, & \displaystyle\end{eqnarray}$$ and, with (2.22a ), expression (2.21b ) becomes (2.28) $$\begin{eqnarray}\unicode[STIX]{x0394}\unicode[STIX]{x1D755}=Kn^{-1}\unicode[STIX]{x1D735}\,\unicode[STIX]{x1D6F1}.\end{eqnarray}$$ Equations (2.21a ) and (2.21c ) result from (2.5a ), (2.5c ) and from the divergence of (2.6b ). Expression (2.21b ) is obtained by taking the divergence of (2.6a ), using (2.5b ) to eliminate $\unicode[STIX]{x1D735}\boldsymbol{\cdot }\unicode[STIX]{x1D748}$ in favour of $\unicode[STIX]{x1D735}p$ , collecting the terms containing this vector and introducing $\unicode[STIX]{x1D6F1}$ defined in (2.23). Finally, expressions (2.24) and (2.25) are simply (2.6a ) and (2.6b ) written in terms of auxiliary variables $\unicode[STIX]{x1D734}$ and $\unicode[STIX]{x1D72E}$ , respectively. 
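The coefficients $\lambda _{1}$, $\lambda _{2}$ and $\lambda _{3}$ introduced in (2.23)–(2.25) set the decay rates of the Knudsen-layer contributions that appear later in the solution. The short Python sketch below, included only as an illustration, evaluates them together with the corresponding e-folding depths $1/\lambda _{i}$ (in units of the sphere radius) for a few assumed Knudsen numbers.

# Decay rates of the Knudsen-layer terms in (2.23)-(2.25):
# lambda_1 = sqrt(5/6)/Kn, lambda_2 = sqrt(5/9)/Kn, lambda_3 = sqrt(3/2)/Kn.
from math import sqrt

def knudsen_layer_rates(Kn):
    return sqrt(5.0 / 6.0) / Kn, sqrt(5.0 / 9.0) / Kn, sqrt(3.0 / 2.0) / Kn

for Kn in (0.02, 0.1, 0.5):  # illustrative Knudsen numbers (assumed)
    lam1, lam2, lam3 = knudsen_layer_rates(Kn)
    # 1/lambda_i is the e-folding depth of exp[-lambda_i (r - 1)] in sphere radii
    print(f"Kn = {Kn:4.2f}: lambda = ({lam1:6.2f}, {lam2:6.2f}, {lam3:6.2f}), "
          f"depths = ({1/lam1:.3f}, {1/lam2:.3f}, {1/lam3:.3f})")

For small $Kn$ these layers are thin compared with the sphere radius, whereas for $Kn$ of order one they extend over an appreciable fraction of it, in line with the range of validity of R13 discussed later.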
Rewriting the R13-moment equations in a way that includes the more familiar form of (2.21) or, more recognizably, (2.26)–(2.28), will facilitate the application to this set of equations of analytical methods used in the solution of the equations for Stokes flow in vector form. We shall extend these methods to (2.23)–(2.25), which include a rank-two tensor, second-order differential equation. In this section, we seek solutions for the flow disturbances in the gas that vanish in the far field alongside temperature profiles inside the sphere which are coupled through the set of conditions prescribed at the sphere's surface. Consider partial differential equations (2.26)–(2.28) and (2.23)–(2.25) for the gas, and (2.7) for the temperature in the solid particle. Based on this, scalar fields $\unicode[STIX]{x1D6F1}$ , $\unicode[STIX]{x1D701}$ and $\unicode[STIX]{x1D703}_{s}$ are harmonic functions. For fields $\unicode[STIX]{x1D755}$ , $p$ , $\text{q}$ , $\unicode[STIX]{x1D748}$ , their solutions will be written as the sum of the solution of the homogeneous equation plus a particular solution of (2.28) and (2.23)–(2.25), respectively (e.g. see Lamb 1932; Leal 2007). The solutions of the homogeneous equations will be found with the method of multipole potentials. After these fields have been determined, the velocity and temperature fields $\text{u}$ and $\unicode[STIX]{x1D703}$ can be obtained from expressions (2.19) and (2.20), respectively. Starting with (2.28), to find a particular solution we let (3.1) $$\begin{eqnarray}\unicode[STIX]{x1D71B}_{i}^{(p)}=Kn^{-1}\unicode[STIX]{x1D6FC}_{i}\unicode[STIX]{x1D6F1},\end{eqnarray}$$ where $\unicode[STIX]{x1D6FC}_{i}$ is a vector field to be determined and the subscript $i$ denotes Cartesian index notation. Computing (3.2) $$\begin{eqnarray}\unicode[STIX]{x0394}\unicode[STIX]{x1D71B}_{i}^{(p)}=Kn^{-1}(\unicode[STIX]{x0394}\unicode[STIX]{x1D6FC}_{i})\unicode[STIX]{x1D6F1}+2Kn^{-1}\frac{\unicode[STIX]{x2202}\unicode[STIX]{x1D6FC}_{i}}{\unicode[STIX]{x2202}x_{j}}\frac{\unicode[STIX]{x2202}\unicode[STIX]{x1D6F1}}{\unicode[STIX]{x2202}x_{j}},\end{eqnarray}$$ after using the fact that $\unicode[STIX]{x1D6F1}$ is harmonic, and comparing this result with the left-hand side of (2.28), namely, $Kn^{-1}\unicode[STIX]{x2202}\unicode[STIX]{x1D6F1}/\unicode[STIX]{x2202}x_{i}$ , we find $\unicode[STIX]{x0394}\unicode[STIX]{x1D6FC}_{i}=0$ and $\unicode[STIX]{x2202}\unicode[STIX]{x1D6FC}_{i}/\unicode[STIX]{x2202}x_{j}=\unicode[STIX]{x1D6FF}_{ij}/2$ , where $\unicode[STIX]{x1D6FF}_{ij}$ is the Kronecker delta. Therefore, $\unicode[STIX]{x1D6FC}_{i}=x_{i}/2$ and (Leal 2007) (3.3) $$\begin{eqnarray}\unicode[STIX]{x1D755}^{(p)}={\textstyle \frac{1}{2}}Kn^{-1}\text{x}\,\unicode[STIX]{x1D6F1},\end{eqnarray}$$ with $\text{x}$ the position vector from the centre of the sphere. Next, for (2.23) and (2.24), solutions $p^{(p)}$ and $\text{q}^{(p)}$ can be readily found by inspection by using the fact that $\unicode[STIX]{x1D6F1}$ and $\unicode[STIX]{x1D701}$ are harmonic functions. This leads to (3.4) $$\begin{eqnarray}p^{(p)}=\unicode[STIX]{x1D6F1},\end{eqnarray}$$ (3.5) $$\begin{eqnarray}\text{q}^{(p)}=-{\textstyle \frac{15}{4}}Kn\unicode[STIX]{x1D735}\unicode[STIX]{x1D701},\end{eqnarray}$$ respectively. 
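Before moving to the remaining particular solution, note that the argument leading to (3.3) relies only on $\Pi$ being harmonic. As an optional consistency check, not part of the original derivation, the SymPy sketch below verifies the identity $\Delta \big(x_{i}\Pi /(2Kn)\big)=Kn^{-1}\,\partial \Pi /\partial x_{i}$ for a dipole-type harmonic function proportional to the one appearing later in (3.10); the specific choice $\Pi =z/r^{3}$ is made only for this check.

# Consistency check of the particular solution (3.3): with Pi harmonic,
# Delta( x_i * Pi / (2 Kn) ) should equal (1/Kn) * d(Pi)/d(x_i).
import sympy as sp

x, y, z, Kn = sp.symbols('x y z Kn', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
Pi = z / r**3                  # dipole-type harmonic function (illustrative choice)

def laplacian(f):
    return sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

print("Delta Pi =", sp.simplify(laplacian(Pi)))      # expected 0 (Pi harmonic for r != 0)

for xi in (x, y, z):
    w_comp = xi * Pi / (2 * Kn)                      # component of the particular solution (3.3)
    residual = sp.simplify(laplacian(w_comp) - sp.diff(Pi, xi) / Kn)
    print("residual for component", xi, ":", residual)   # expected 0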
Finally, solution $\unicode[STIX]{x1D748}^{(p)}$ for (2.25), after eliminating $\unicode[STIX]{x1D72E}$ with (2.22a ), can be found by letting (3.6) $$\begin{eqnarray}\unicode[STIX]{x1D748}^{(p)}=2Kn\,\unicode[STIX]{x1D706}_{3}^{2}\,\overline{\unicode[STIX]{x1D735}\unicode[STIX]{x1D737}}+{\textstyle \frac{4}{5}}Kn^{2}\,\unicode[STIX]{x1D706}_{3}^{2}\,\overline{\unicode[STIX]{x1D735}\unicode[STIX]{x1D735}\unicode[STIX]{x1D702}},\end{eqnarray}$$ such that vector field $\unicode[STIX]{x1D737}$ and scalar field $\unicode[STIX]{x1D702}$ satisfy, respectively, (3.7a ) $$\begin{eqnarray}\displaystyle & \displaystyle (\unicode[STIX]{x1D6E5}-\unicode[STIX]{x1D706}_{3}^{2})\unicode[STIX]{x1D737}=\unicode[STIX]{x1D755}, & \displaystyle\end{eqnarray}$$ (3.7b ) $$\begin{eqnarray}\displaystyle & \displaystyle (\unicode[STIX]{x1D6E5}-\unicode[STIX]{x1D706}_{3}^{2})\unicode[STIX]{x1D702}=p. & \displaystyle\end{eqnarray}$$ After adding and subtracting $\unicode[STIX]{x1D706}_{1}^{2}\unicode[STIX]{x1D702}$ to the left-hand side of (3.7b ), we seek solutions of (3.7) of the form (3.8a ) $$\begin{eqnarray}\displaystyle & \displaystyle \unicode[STIX]{x1D737}=-\unicode[STIX]{x1D755}/\unicode[STIX]{x1D706}_{3}^{2}+m\unicode[STIX]{x1D735}\unicode[STIX]{x1D6F1}, & \displaystyle\end{eqnarray}$$ (3.8b ) $$\begin{eqnarray}\displaystyle & \displaystyle \unicode[STIX]{x1D702}=p/(\unicode[STIX]{x1D706}_{1}^{2}-\unicode[STIX]{x1D706}_{3}^{2})+n\unicode[STIX]{x1D6F1}. & \displaystyle\end{eqnarray}$$ Substitution of (3.8) into (3.7) results in $m=-Kn^{-1}/\unicode[STIX]{x1D706}_{3}^{4}$ and $n=-(\unicode[STIX]{x1D706}_{1}/\unicode[STIX]{x1D706}_{3})^{2}/(\unicode[STIX]{x1D706}_{1}^{2}-\unicode[STIX]{x1D706}_{3}^{2})$ , so that (3.9a ) $$\begin{eqnarray}\displaystyle & \displaystyle \unicode[STIX]{x1D737}=-{\textstyle \frac{1}{\unicode[STIX]{x1D706}_{3}^{4}}}(\unicode[STIX]{x1D706}_{3}^{2}\unicode[STIX]{x1D755}+Kn^{-1}\unicode[STIX]{x1D735}\unicode[STIX]{x1D6F1}), & \displaystyle\end{eqnarray}$$ (3.9b ) $$\begin{eqnarray}\displaystyle & \displaystyle \unicode[STIX]{x1D702}=\frac{1}{\unicode[STIX]{x1D706}_{1}^{2}-\unicode[STIX]{x1D706}_{3}^{2}}\left(p-\frac{\unicode[STIX]{x1D706}_{1}^{2}}{\unicode[STIX]{x1D706}_{3}^{2}}\unicode[STIX]{x1D6F1}\right), & \displaystyle\end{eqnarray}$$ and $\unicode[STIX]{x1D748}^{(p)}$ is given by (3.6) with (3.9); it is symmetric and trace free. The form of the solutions of the homogeneous equations associated with (2.28) and (2.23)–(2.25) is constructed by means of the method of multipole potentials. This method is presented in the context of classical hydrodynamics and applied to problems of low Reynolds number flows in Leal (2007, chap. 8) as well as thoroughly discussed and used in a wider range of problems of mathematical physics in Hess (2015, chap. 10). In the former, it is named as the method of superposition of vector harmonic functions. The most illustrative example discussed in those sources is perhaps the problem of Stokes flow past a sphere. As we shall see, a significant advantage of the method of multipole potentials concerning our problem is that we can obtain the solution of the deviatoric stress $\unicode[STIX]{x1D748}$ without going through the complexity of writing and solving differential equations for its scalar components in spherical coordinates. The general definitions of multipole potential tensors, the form of the first few of them and properties relevant to this work are presented in appendix A. 
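Although appendix A is not reproduced here, the descending multipole potentials it defines are, up to sign and normalization conventions, successive gradients of $1/r$. Under that assumption, the SymPy sketch below generates $X_{0}$, $X_{i}$ and $X_{ij}$ from $1/r$ and confirms that each component is harmonic away from the origin, which is the property exploited in what follows.

# Descending multipole potentials generated as successive gradients of 1/r
# (sign/normalization conventions are those of appendix A and are assumed here),
# together with a check that each component is harmonic for r != 0.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
coords = (x, y, z)
r = sp.sqrt(x**2 + y**2 + z**2)

def laplacian(f):
    return sum(sp.diff(f, xi, 2) for xi in coords)

X0 = 1 / r                                                       # monopole potential
Xi = [sp.diff(X0, xi) for xi in coords]                          # dipole potentials X_i
Xij = [[sp.diff(X0, xi, xj) for xj in coords] for xi in coords]  # quadrupole potentials X_ij

print(sp.simplify(laplacian(X0)))                 # expected 0
print([sp.simplify(laplacian(f)) for f in Xi])    # expected [0, 0, 0]
print(sp.simplify(laplacian(Xij[0][2])))          # e.g. the xz component, expected 0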
Because the governing equations and boundary conditions are linear in the perturbation fields, and the non-homogeneous terms in the boundary conditions must be linear in $G\text{k}$ , with $G=Ep$ or $G=Ma$ for the problems in §§ 2.1 or 2.2, respectively, the homogeneous solutions are constructed by adding products of multipole potentials with vector $G\text{k}$ . From the set of all multipole potential tensors, the method of multipole potential guides the selection of those potentials that should be used to construct the solution. Only those multipole potentials that conform to the rank of the unknown field (i.e. whether it is a scalar, a vector, a rank-two tensor, etc.), its symmetry, and its parity (i.e. whether it is a true scalar, true vector or true tensor or a pseudo-scalar, pseudo-vector or pseudo-tensor), after their product with $G\text{k}$ can be part of the solution. Regarding the parity attribute, in general terms, the components of a true scalar, vector or tensor change sign when subjected to an improper rotation, such as a reflection or inversion – e.g. changing the coordinate system from right-handed to left-handed – in order to continue properly describing an unchanged physical situation (Arfken et al. 2012, chap. 3). For a so-called pseudo-scalar, vector or tensor, their components do not change sign accordingly. For instance, the angular velocity is a pseudo-vector. A deeper look at this aspect and, in general, at the method of multipole potentials for constructing solutions of physical quantities is beyond the scope of the present article. For detailed discussions and a list of enlightening examples, the reader may referred to the cited books by Leal (2007) and Hess (2015). Following these considerations, using Cartesian index notation, and letting $G_{i}$ be the components of vector $G\text{k}$ , we can write for the solutions (3.10) $$\begin{eqnarray}\displaystyle & \displaystyle \unicode[STIX]{x1D6F1}=A_{1}G_{j}X_{j}, & \displaystyle\end{eqnarray}$$ (3.11) $$\begin{eqnarray}\displaystyle & \displaystyle \unicode[STIX]{x1D701}=B_{1}G_{j}X_{j}, & \displaystyle\end{eqnarray}$$ (3.12) $$\begin{eqnarray}\displaystyle & \displaystyle \unicode[STIX]{x1D71B}_{i}=C_{1}G_{i}X_{0}+C_{2}G_{j}X_{ji}+\unicode[STIX]{x1D71B}_{i}^{(p)}, & \displaystyle\end{eqnarray}$$ (3.13) $$\begin{eqnarray}\displaystyle & \displaystyle p=a(r)\,G_{j}X_{j}+p^{(p)}, & \displaystyle\end{eqnarray}$$ (3.14) $$\begin{eqnarray}\displaystyle & \displaystyle q_{i}=b(r)\,G_{i}X_{0}+c(r)\,G_{j}X_{ji}+q_{i}^{(p)}, & \displaystyle\end{eqnarray}$$ (3.15) $$\begin{eqnarray}\displaystyle & \displaystyle \unicode[STIX]{x1D70E}_{ij}=f(r)\,\left[{\textstyle \frac{1}{2}}(G_{i}X_{j}+G_{j}X_{i})-{\textstyle \frac{1}{3}}\unicode[STIX]{x1D6FF}_{ij}\,G_{k}X_{k}\right]+g(r)\,G_{k}X_{kij}+\unicode[STIX]{x1D70E}_{ij}^{(p)}, & \displaystyle\end{eqnarray}$$ where terms with superscript ' $(p)$ ' are given by (3.3)–(3.6); $A_{1}$ , $B_{1}$ , $C_{1}$ and $C_{2}$ , are constants to be determined by the boundary conditions, and fields $X_{0}$ , $X_{i}$ , $X_{ij},\ldots \,$ , denote the descending multipole potentials listed in appendix A. Note that in (3.13)–(3.15) we use functions of the radial coordinate $a(r)$ , $b(r)$ , $c(r)$ , $f(r)$ and $g(r)$ , instead of constants as for (3.10)–(3.12), because the solutions of the homogeneous equations for $p$ , $\text{q}$ and $\unicode[STIX]{x1D748}$ , must satisfy modified Helmholtz equations rather than Laplace equations. 
The descending multipole potentials satisfy Laplace equation for $r\neq 0$ . In the present problem, $r=0$ is outside the gas domain. For the interior of the sphere, the temperature profile is given by the ascending multipole potential (3.16) $$\begin{eqnarray}\unicode[STIX]{x1D703}_{s}=D_{1}r^{3}G_{j}X_{j},\end{eqnarray}$$ which introduces another constant, $D_{1}$ . Both the equivalence in tensor rank and parity between the left- and right-hand sides of (3.10)–(3.16) determined the type of products employed between the external perturbation vector $G_{i}$ and the multipole potential tensors in these expressions. Because $\unicode[STIX]{x1D701}$ is harmonic, equation (3.5) yields $\unicode[STIX]{x1D735}\boldsymbol{\cdot }\text{q}^{(p)}=0$ . With the divergence-free vector $\text{q}=\text{q}^{(h)}+\text{q}^{(p)}$ , where superscript '( $h$ )' denotes the homogeneous solution, we are left with $\unicode[STIX]{x1D735}\boldsymbol{\cdot }\text{q}^{(h)}=0$ . Similarly, one can show that taking the divergence of $\unicode[STIX]{x1D748}^{(p)}$ using (3.6) results in $\unicode[STIX]{x1D735}\boldsymbol{\cdot }\unicode[STIX]{x1D748}^{(p)}=-\unicode[STIX]{x1D735}p$ . Therefore, since $\unicode[STIX]{x1D748}=\unicode[STIX]{x1D748}^{(h)}+\unicode[STIX]{x1D748}^{(p)}$ , and because of expression (2.5b ), we must also have $\unicode[STIX]{x1D735}\boldsymbol{\cdot }\unicode[STIX]{x1D748}^{(h)}=0$ . Note in (3.15) that, by construction, $\unicode[STIX]{x1D748}^{(h)}$ is also symmetric and trace free. Taking the divergence of $\unicode[STIX]{x1D755}$ using (3.12) and (3.3), setting it equal to zero and using the definitions of the multipole potentials from appendix A results in (3.17) $$\begin{eqnarray}C_{1}={\textstyle \frac{1}{2}}Kn^{-1}A_{1}.\end{eqnarray}$$ With $\text{q}^{(h)}$ from (3.14), setting $\unicode[STIX]{x1D735}\boldsymbol{\cdot }\text{ q}^{(h)}=0$ yields this constraint for functions $b(r)$ and $c(r)$ (3.18) $$\begin{eqnarray}r^{2}b^{\prime }+2c^{\prime }-rb=0.\end{eqnarray}$$ Furthermore, the vector equation $\unicode[STIX]{x1D735}\boldsymbol{\cdot }\unicode[STIX]{x1D748}^{(h)}=0$ , with $\unicode[STIX]{x1D748}^{(h)}$ from (3.15), leads to (3.19a ) $$\begin{eqnarray}\displaystyle & \displaystyle 2r^{2}f^{\prime }-rf+18g^{\prime }=0, & \displaystyle\end{eqnarray}$$ (3.19b ) $$\begin{eqnarray}\displaystyle & 32r^{2}f^{\prime }+rf-18g^{\prime }=0, & \displaystyle\end{eqnarray}$$ so that $f^{\prime }(r)\equiv 0$ and $g^{\prime }(r)=rf(r)/18$ . Substitution of expressions (3.13)–(3.15) for $p$ , $\text{q}$ and $\unicode[STIX]{x1D748}$ into (2.23)–(2.25), respectively, and invocation of property (A 7) from appendix A, lead to the following ordinary differential equations (3.20a ) $$\begin{eqnarray}\displaystyle & \displaystyle a^{\prime \prime }-2r^{-1}a^{\prime }-\unicode[STIX]{x1D706}_{1}^{2}a=0, & \displaystyle\end{eqnarray}$$ (3.20b ) $$\begin{eqnarray}\displaystyle & \displaystyle b^{\prime \prime }-\unicode[STIX]{x1D706}_{2}^{2}b=0, & \displaystyle\end{eqnarray}$$ (3.20c ) $$\begin{eqnarray}\displaystyle & \displaystyle c^{\prime \prime }-4r^{-1}c^{\prime }-\unicode[STIX]{x1D706}_{2}^{2}c=0, & \displaystyle\end{eqnarray}$$ (3.20d ) $$\begin{eqnarray}\displaystyle & \displaystyle f^{\prime \prime }-2r^{-1}f^{\prime }-\unicode[STIX]{x1D706}_{3}^{2}f=0, & \displaystyle\end{eqnarray}$$ (3.20e ) $$\begin{eqnarray}\displaystyle & \displaystyle g^{\prime \prime }-6r^{-1}g^{\prime }-\unicode[STIX]{x1D706}_{3}^{2}g=0. 
& \displaystyle\end{eqnarray}$$ Using $f^{\prime }(r)\equiv 0$ in (3.20d ) yields $f(r)\equiv 0$ ; hence, from above, we have that $g^{\prime }(r)\equiv 0$ and (3.20e ) results in $g(r)\equiv 0$ . We thus find that $\unicode[STIX]{x1D748}^{(h)}(\text{x})\equiv 0$ . Therefore, $\unicode[STIX]{x1D748}=\unicode[STIX]{x1D748}^{(p)}$ , given in (3.6) which, in turn, makes use of expressions (3.10), (3.12) and (3.13). The solution of (3.20b ) is well known. Obtaining exact solutions for (3.20a ) and (3.20c ) is discussed in appendix B. These solutions can be written as (3.21a ) $$\begin{eqnarray}\displaystyle & \displaystyle a(r)=E_{1}\exp [-\unicode[STIX]{x1D706}_{1}(r-1)](1+\unicode[STIX]{x1D706}_{1}r), & \displaystyle\end{eqnarray}$$ (3.21b ) $$\begin{eqnarray}\displaystyle & \displaystyle b(r)=F_{1}\exp [-\unicode[STIX]{x1D706}_{2}(r-1)], & \displaystyle\end{eqnarray}$$ (3.21c ) $$\begin{eqnarray}\displaystyle & \displaystyle c(r)=F_{2}\exp [-\unicode[STIX]{x1D706}_{2}(r-1)](1+\unicode[STIX]{x1D706}_{2}r+\unicode[STIX]{x1D706}_{2}^{2}r^{2}/3), & \displaystyle\end{eqnarray}$$ where only the solution that decays for large $r$ has been retained. Substitution of (3.21b ) and (3.21c ) into the divergence-free condition for the heat flux (3.18) results in (3.22) $$\begin{eqnarray}F_{1}=-{\textstyle \frac{2}{3}}\unicode[STIX]{x1D706}_{2}^{2}F_{2}.\end{eqnarray}$$ Note that one could have also written explicit expressions for $f(r)$ and $g(r)$ from solving (3.20d ) and (3.20e ) in a manner similar to that followed to obtain (3.21), only to find out that, after enforcing $\unicode[STIX]{x1D735}\boldsymbol{\cdot }\unicode[STIX]{x1D748}^{(h)}=0$ , the associated integration constants would be equal to zero, thus yielding $f=g\equiv 0$ . According to the R13 theory, the structure of the Knudsen layer near a boundary is partially determined by the factors $\exp (-\unicode[STIX]{x1D706}_{i}^{2}r)$ in the solution ( $i=1,2,3$ for the present case). The contribution with $\unicode[STIX]{x1D706}_{3}$ vanishes in this case resulting from the vanishing of $\unicode[STIX]{x1D748}^{(h)}$ . The factor containing the exponential with $\unicode[STIX]{x1D706}_{1}$ is present in the pressure, temperature and, through the pressure, in the deviatoric stress. On the other hand, the factor containing the exponential with $\unicode[STIX]{x1D706}_{2}$ occurs in the heat flux and velocity. All these features were already noted by Torrilhon (2010) in the solution for the special case of slow flow past an isothermal sphere. We can now write the final form of the exact solutions for the field variables. The pressure deviation in the gas $p$ can be computed from (3.13), using (3.4) and (3.10). The temperature deviation $\unicode[STIX]{x1D703}$ is found from (2.20), with (3.11) and (3.13). The expression for the gas velocity $\text{u}$ is obtained from (2.19), using (3.12) and (3.14), together with (3.3), (3.10), (3.5) and (3.11). The heat-flux vector $\text{q}$ is computed using (3.14), (3.5) and (3.12), whereas the stress tensor $\unicode[STIX]{x1D748}$ is determined from expressions (3.15) and (3.6), with (3.9), (3.10), (3.12) and (3.13). The temperature deviation in the solid $\unicode[STIX]{x1D703}_{s}$ is given by (3.16). 
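As an additional, optional check, an illustration rather than part of the published derivation, the SymPy sketch below substitutes the decaying solutions (3.21a) and (3.21c) back into the radial equations (3.20a) and (3.20c) and confirms that the residuals simplify to zero.

# Verify that (3.21a) and (3.21c) solve the radial equations (3.20a) and (3.20c).
import sympy as sp

r = sp.symbols('r', positive=True)
lam1, lam2, E1, F2 = sp.symbols('lambda_1 lambda_2 E_1 F_2', positive=True)

a_r = E1 * sp.exp(-lam1 * (r - 1)) * (1 + lam1 * r)                        # (3.21a)
c_r = F2 * sp.exp(-lam2 * (r - 1)) * (1 + lam2 * r + lam2**2 * r**2 / 3)   # (3.21c)

res_a = sp.diff(a_r, r, 2) - 2 / r * sp.diff(a_r, r) - lam1**2 * a_r       # (3.20a)
res_c = sp.diff(c_r, r, 2) - 4 / r * sp.diff(c_r, r) - lam2**2 * c_r       # (3.20c)

print(sp.simplify(res_a))   # expected: 0
print(sp.simplify(res_c))   # expected: 0

The same substitution applied to $b(r)$ from (3.21b) and (3.20b) is immediate, since (3.20b) has constant coefficients.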
After substitution of (3.17), (3.21) and (3.22), the final expressions for the deviations are (3.23a ) $$\begin{eqnarray}\displaystyle p & = & \displaystyle A_{1}Gr^{-2}\cos \unicode[STIX]{x1D717}+E_{1}Gr^{-2}(\unicode[STIX]{x1D706}_{1}r+1)\exp [-\unicode[STIX]{x1D706}_{1}(r-1)]\cos \unicode[STIX]{x1D717},\end{eqnarray}$$ (3.23b ) $$\begin{eqnarray}\displaystyle \unicode[STIX]{x1D703} & = & \displaystyle {\textstyle \frac{1}{5}}(2A_{1}+5B_{1})Gr^{-2}\cos \unicode[STIX]{x1D717}+{\textstyle \frac{2}{5}}E_{1}Gr^{-2}(\unicode[STIX]{x1D706}_{1}r+1)\exp [-\unicode[STIX]{x1D706}_{1}(r-1)]\cos \unicode[STIX]{x1D717},\qquad\end{eqnarray}$$ (3.23c ) $$\begin{eqnarray}\displaystyle u_{r} & = & \displaystyle Kn^{-1}Gr^{-3}(A_{1}r^{2}-3Kn^{2}B_{1}+2KnC_{2})\cos \unicode[STIX]{x1D717}\nonumber\\ \displaystyle & & \displaystyle -\,{\textstyle \frac{4}{5}}F_{2}Gr^{-3}(\unicode[STIX]{x1D706}_{2}r+1)\exp [-\unicode[STIX]{x1D706}_{2}(r-1)]\cos \unicode[STIX]{x1D717},\end{eqnarray}$$ (3.23d ) $$\begin{eqnarray}\displaystyle u_{\unicode[STIX]{x1D717}} & = & \displaystyle -{\textstyle \frac{1}{2}}Kn^{-1}Gr^{-3}(A_{1}r^{2}+3Kn^{2}B_{1}-2KnC_{2})\sin \unicode[STIX]{x1D717}\nonumber\\ \displaystyle & & \displaystyle -\,{\textstyle \frac{2}{5}}F_{2}Gr^{-3}(\unicode[STIX]{x1D706}_{2}^{2}r^{2}+\unicode[STIX]{x1D706}_{2}r+1)\exp [-\unicode[STIX]{x1D706}_{2}(r-1)]\sin \unicode[STIX]{x1D717},\end{eqnarray}$$ (3.23e ) $$\begin{eqnarray}\displaystyle q_{r} & = & \displaystyle {\textstyle \frac{15}{2}}KnB_{1}Gr^{-3}\cos \unicode[STIX]{x1D717}+2F_{2}Gr^{-3}(\unicode[STIX]{x1D706}_{2}r+1)\exp [-\unicode[STIX]{x1D706}_{2}(r-1)]\cos \unicode[STIX]{x1D717},\end{eqnarray}$$ (3.23f ) $$\begin{eqnarray}\displaystyle q_{\unicode[STIX]{x1D717}} & = & \displaystyle {\textstyle \frac{15}{4}}KnB_{1}Gr^{-3}\sin \unicode[STIX]{x1D717}+F_{2}Gr^{-3}(\unicode[STIX]{x1D706}_{2}^{2}r^{2}+\unicode[STIX]{x1D706}_{2}r+1)\exp [-\unicode[STIX]{x1D706}_{2}(r-1)]\sin \unicode[STIX]{x1D717},\end{eqnarray}$$ (3.23g ) $$\begin{eqnarray}\displaystyle \unicode[STIX]{x1D70E}_{rr} & = & \displaystyle \displaystyle \frac{2}{5}Gr^{-4}(5A_{1}r^{2}-12Kn^{2}A_{1}-30\unicode[STIX]{x1D706}_{3}^{-2}A_{1}+30KnC_{2})\cos \unicode[STIX]{x1D717}+\displaystyle \frac{8}{15}\frac{\unicode[STIX]{x1D706}_{3}^{2}}{\unicode[STIX]{x1D706}_{1}^{2}-\unicode[STIX]{x1D706}_{3}^{2}}\nonumber\\ \displaystyle & & \displaystyle \times \,Kn^{2}E_{1}Gr^{-4}(\unicode[STIX]{x1D706}_{1}^{3}r^{3}+4\unicode[STIX]{x1D706}_{1}^{2}r^{2}+9\unicode[STIX]{x1D706}_{1}r+9)\exp [-\unicode[STIX]{x1D706}_{1}(r-1)]\cos \unicode[STIX]{x1D717},\end{eqnarray}$$ (3.23h ) $$\begin{eqnarray}\displaystyle \unicode[STIX]{x1D70E}_{\unicode[STIX]{x1D717}r} & = & \displaystyle -{\textstyle \frac{6}{5}}Gr^{-4}(2Kn^{2}A_{1}+5\unicode[STIX]{x1D706}_{3}^{-2}A_{1}-5KnC_{2})\sin \unicode[STIX]{x1D717}\nonumber\\ \displaystyle & & \displaystyle +\,\frac{4}{5}\frac{\unicode[STIX]{x1D706}_{3}^{2}}{\unicode[STIX]{x1D706}_{1}^{2}-\unicode[STIX]{x1D706}_{3}^{2}}Kn^{2}E_{1}Gr^{-4}(\unicode[STIX]{x1D706}_{1}^{2}r^{2}+3\unicode[STIX]{x1D706}_{1}r+3)\exp [-\unicode[STIX]{x1D706}_{1}(r-1)]\sin \unicode[STIX]{x1D717},\end{eqnarray}$$ (3.23i ) $$\begin{eqnarray}\displaystyle \unicode[STIX]{x1D703}_{s} & = & \displaystyle D_{1}Gr\cos \unicode[STIX]{x1D717}.\end{eqnarray}$$ From the solution for $\unicode[STIX]{x1D748}$ , we find that $\unicode[STIX]{x1D70E}_{\unicode[STIX]{x1D717}\unicode[STIX]{x1D717}}=\unicode[STIX]{x1D70E}_{\unicode[STIX]{x1D719}\unicode[STIX]{x1D719}}=-\unicode[STIX]{x1D70E}_{rr}/2$ , a result already obtained 
by Torrilhon (2010) from an entirely different route. We also find that $\sigma_{\phi r}=\sigma_{\vartheta\phi}=0$, because of the problem symmetry. To compute the higher-order moment tensors $\text{R}$ and $\text{m}$ defined in (2.16a) and (2.16b), respectively, we first calculate the rank-two and rank-three tensors $\nabla\text{q}$ and $\nabla\boldsymbol{\sigma}$, respectively. We compute these gradients by means of a popular symbolic algebra and calculus computer package (Wolfram Mathematica), leading to expressions that we do not reproduce here for the sake of space. An alternative path to $\nabla\boldsymbol{\sigma}$ is to extract the components $\sigma_{rr}$, $\sigma_{\vartheta r}$ and $\sigma_{\vartheta\vartheta}$ from $\boldsymbol{\sigma}$, compute their gradients and combine them with the dyads, and the gradients of the dyads, formed by the spherical-coordinate unit vectors. Another option is to apply the formulae for the components in spherical coordinates of the gradient of a rank-two tensor field given in Torrilhon (2010). Once $\nabla\text{q}$ and $\nabla\boldsymbol{\sigma}$ have been determined, their corresponding symmetric and trace-free tensors are computed from the formulae in appendix C. The expressions for the components of $\text{R}$ and $\text{m}$ needed in the boundary conditions are

(3.24a) $$\text{R}_{rr}=108\,Kn^{2}B_{1}Gr^{-4}\cos\vartheta+\tfrac{48}{5}Kn\,F_{2}Gr^{-4}\exp[-\lambda_{2}(r-1)](\lambda_{2}^{2}r^{2}+3\lambda_{2}r+3)\cos\vartheta,$$

(3.24b) $$\text{R}_{\vartheta r}=54\,Kn^{2}B_{1}Gr^{-4}\sin\vartheta+\tfrac{12}{5}Kn\,F_{2}Gr^{-4}\exp[-\lambda_{2}(r-1)](\lambda_{2}^{3}r^{3}+3\lambda_{2}^{2}r^{2}+6\lambda_{2}r+6)\sin\vartheta,$$

(3.24c) $$\text{m}_{rrr}=-\tfrac{48}{5}Kn\,Gr^{-5}(10\lambda_{3}^{-2}A_{1}+4Kn^{2}A_{1}-r^{2}A_{1}-10Kn\,C_{2})\cos\vartheta+\tfrac{16}{25}\frac{\lambda_{3}^{2}}{\lambda_{1}^{2}-\lambda_{3}^{2}}Kn^{3}E_{1}Gr^{-5}\exp[-\lambda_{1}(r-1)](\lambda_{1}^{4}r^{4}+7\lambda_{1}^{3}r^{3}+27\lambda_{1}^{2}r^{2}+60\lambda_{1}r+60)\cos\vartheta,$$

(3.24d) $$\text{m}_{\vartheta rr}=-\tfrac{8}{5}Kn\,Gr^{-5}(30\lambda_{3}^{-2}A_{1}+12Kn^{2}A_{1}-r^{2}A_{1}-30Kn\,C_{2})\sin\vartheta+\tfrac{32}{25}\frac{\lambda_{3}^{2}}{\lambda_{1}^{2}-\lambda_{3}^{2}}Kn^{3}E_{1}Gr^{-5}\exp[-\lambda_{1}(r-1)](\lambda_{1}^{3}r^{3}+6\lambda_{1}^{2}r^{2}+15\lambda_{1}r+15)\sin\vartheta.$$

Torrilhon (2010) also gives formulae for the components $\text{R}_{rr}$, $\text{R}_{\vartheta r}$, $\text{m}_{rrr}$ and $\text{m}_{\vartheta rr}$ directly in terms of derivatives of the components of the heat flux and stress, which can be used to derive expressions (3.24). Regarding the boundary conditions, from the solutions we also find that $\text{R}_{\vartheta\vartheta}=\text{R}_{\phi\phi}$ and $\text{m}_{\vartheta\vartheta r}=\text{m}_{\phi\phi r}$. Because of this, and referring to the discussion at the end of § 2.1, we have that by enforcing (2.11), constraint (2.12) can be dropped entirely. Moreover, equations (2.9) and (2.13) provide constraints for $\sigma_{\vartheta r}$ and $\text{R}_{\vartheta r}$, whereas they are trivially satisfied for $\sigma_{\phi r}$ and $\text{R}_{\phi r}$. Therefore, including (2.10), (2.14) and (2.15), in total we are left with six scalar boundary conditions. Substitution of the solutions for the various fields into these boundary conditions leads to a system of six linear algebraic equations for the six unknowns $A_{1}$, $B_{1}$, $C_{2}$, $D_{1}$, $E_{1}$ and $F_{2}$. For the problem of a sphere exposed to a uniform temperature gradient in the far field, this system may be written as

(3.25a) $$-18Kn(256\sqrt{2}Kn^{4}+64\sqrt{\pi}Kn^{3}-8\sqrt{2}Kn^{2}+5\sqrt{2})A_{1}-135\sqrt{2}Kn^{3}B_{1}+180Kn^{2}(24\sqrt{2}Kn^{2}+6\sqrt{\pi}Kn+\sqrt{2})C_{2}-18Kn^{2}[216\sqrt{2}Kn^{3}+18(4\sqrt{15}+3\sqrt{\pi})Kn^{2}+9\sqrt{2}(\sqrt{15\pi}+8)Kn+4\sqrt{15}+15\sqrt{\pi}]E_{1}-4\sqrt{2}(9Kn^{2}+3\sqrt{5}Kn+5)F_{2}=-135\sqrt{2}Kn^{3},$$

(3.25b) $$-84\sqrt{2}Kn(32Kn^{2}-9)A_{1}+30Kn(270\sqrt{2}Kn^{2}+105\sqrt{\pi}Kn+28\sqrt{2})B_{1}+2520\sqrt{2}Kn^{2}C_{2}-840\sqrt{2}Kn\,D_{1}-42(54\sqrt{2}Kn^{3}+18\sqrt{15}Kn^{2}+12\sqrt{2}Kn-\sqrt{15})E_{1}+40[54\sqrt{2}Kn^{2}+3(6\sqrt{10}+7\sqrt{\pi})Kn+10\sqrt{2}+7\sqrt{5\pi}]F_{2}=1575\sqrt{\pi}Kn^{2},$$

(3.25c) $$-42Kn(1280\sqrt{\pi}Kn^{3}+224\sqrt{2}Kn^{2}-120\sqrt{\pi}Kn-33\sqrt{2})A_{1}+30\sqrt{2}Kn(135Kn^{2}-7)B_{1}+1260Kn^{2}(40\sqrt{\pi}Kn+7\sqrt{2})C_{2}+210\sqrt{2}Kn\,D_{1}-21[2160\sqrt{\pi}Kn^{4}+18\sqrt{2}(20\sqrt{15\pi}+21)Kn^{3}+18(7\sqrt{15}+45\sqrt{\pi})Kn^{2}+\sqrt{2}(35\sqrt{15\pi}+144)Kn+13\sqrt{15}+25\sqrt{\pi}]E_{1}+40\sqrt{2}(27Kn^{2}+9\sqrt{5}Kn+5)F_{2}=0,$$

(3.25d) $$-18\sqrt{2}Kn(256Kn^{4}-8Kn^{2}-5)A_{1}+135Kn^{3}(72\sqrt{\pi}Kn+13\sqrt{2})B_{1}+180\sqrt{2}Kn^{2}(24Kn^{2}-1)C_{2}-72Kn^{2}(54\sqrt{2}Kn^{3}+18\sqrt{15}Kn^{2}+18\sqrt{2}Kn+\sqrt{15})E_{1}+4[648\sqrt{\pi}Kn^{3}+9(13\sqrt{2}+24\sqrt{5\pi})Kn^{2}+3(13\sqrt{10}+60\sqrt{\pi})Kn+65\sqrt{2}+20\sqrt{5\pi}]F_{2}=-1485\sqrt{2}Kn^{3},$$

(3.25e) $$A_{1}-3Kn^{2}B_{1}+2Kn\,C_{2}-\tfrac{4}{15}(3Kn+\sqrt{5})F_{2}=0,$$

(3.25f) $$\tfrac{15}{2}Kn^{2}B_{1}+\tfrac{15}{4}\Lambda\,Kn^{2}D_{1}+\tfrac{2}{3}(3Kn+\sqrt{5})F_{2}=-\tfrac{15}{4}(\Lambda-1)Kn^{2}.$$

For the velocity problem, the left-hand sides of expressions (3.25) remain the same, whereas the right-hand sides of (3.25a), (3.25b), (3.25d), (3.25e) and (3.25f) become, respectively, $180\sqrt{2}Kn^{2}$, 0, $-180\sqrt{2}Kn^{2}$, $-Kn$ and 0. The solution of this linear system of equations is found by means of a computer symbolic algebra package. The expressions for the coefficients are exceedingly large and are not reproduced here. They depend on the parameters $Kn$ and $\Lambda$. Once these coefficients are determined, the field variables can be computed at any position $(r,\vartheta)$, for given values of the parameters $Kn$, $\Lambda$ and $Ep$ (or $Ma$), using (3.23).
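As an illustration of this step, the following minimal numerical sketch (Python/NumPy; our transcription of (3.25), not the authors' symbolic route) assembles the $6\times 6$ system for given $Kn$ and $\Lambda$ and solves it for the constants $A_{1}$, $B_{1}$, $C_{2}$, $D_{1}$, $E_{1}$ and $F_{2}$; both sets of right-hand sides quoted above are included.

```python
# Minimal numerical sketch: assemble and solve the 6x6 linear system (3.25)
# for the integration constants, given Kn and Lambda. The rows are transcribed
# from (3.25a)-(3.25f); "temperature" selects the right-hand sides of the
# temperature-gradient problem, "velocity" those of the uniform-flow problem.
import numpy as np

def solve_constants(Kn, Lam, problem="temperature"):
    s2, s5, s10, s15 = np.sqrt([2.0, 5.0, 10.0, 15.0])
    sp, s5p, s15p = np.sqrt([np.pi, 5.0 * np.pi, 15.0 * np.pi])
    K = Kn
    M = np.zeros((6, 6))      # unknown ordering: A1, B1, C2, D1, E1, F2
    M[0] = [-18*K*(256*s2*K**4 + 64*sp*K**3 - 8*s2*K**2 + 5*s2),          # (3.25a)
            -135*s2*K**3,
            180*K**2*(24*s2*K**2 + 6*sp*K + s2),
            0.0,
            -18*K**2*(216*s2*K**3 + 18*(4*s15 + 3*sp)*K**2
                      + 9*s2*(s15p + 8)*K + 4*s15 + 15*sp),
            -4*s2*(9*K**2 + 3*s5*K + 5)]
    M[1] = [-84*s2*K*(32*K**2 - 9),                                       # (3.25b)
            30*K*(270*s2*K**2 + 105*sp*K + 28*s2),
            2520*s2*K**2,
            -840*s2*K,
            -42*(54*s2*K**3 + 18*s15*K**2 + 12*s2*K - s15),
            40*(54*s2*K**2 + 3*(6*s10 + 7*sp)*K + 10*s2 + 7*s5p)]
    M[2] = [-42*K*(1280*sp*K**3 + 224*s2*K**2 - 120*sp*K - 33*s2),        # (3.25c)
            30*s2*K*(135*K**2 - 7),
            1260*K**2*(40*sp*K + 7*s2),
            210*s2*K,
            -21*(2160*sp*K**4 + 18*s2*(20*s15p + 21)*K**3
                 + 18*(7*s15 + 45*sp)*K**2 + s2*(35*s15p + 144)*K
                 + 13*s15 + 25*sp),
            40*s2*(27*K**2 + 9*s5*K + 5)]
    M[3] = [-18*s2*K*(256*K**4 - 8*K**2 - 5),                             # (3.25d)
            135*K**3*(72*sp*K + 13*s2),
            180*s2*K**2*(24*K**2 - 1),
            0.0,
            -72*K**2*(54*s2*K**3 + 18*s15*K**2 + 18*s2*K + s15),
            4*(648*sp*K**3 + 9*(13*s2 + 24*s5p)*K**2
               + 3*(13*s10 + 60*sp)*K + 65*s2 + 20*s5p)]
    M[4] = [1.0, -3*K**2, 2*K, 0.0, 0.0, -(4.0/15.0)*(3*K + s5)]          # (3.25e)
    M[5] = [0.0, 7.5*K**2, 0.0, 3.75*Lam*K**2, 0.0, (2.0/3.0)*(3*K + s5)] # (3.25f)
    if problem == "temperature":
        b = np.array([-135*s2*K**3, 1575*sp*K**2, 0.0,
                      -1485*s2*K**3, 0.0, -3.75*(Lam - 1.0)*K**2])
    else:  # uniform-flow (velocity) problem
        b = np.array([180*s2*K**2, 0.0, 0.0, -180*s2*K**2, -K, 0.0])
    return np.linalg.solve(M, b)   # A1, B1, C2, D1, E1, F2

# Example: constants of the temperature problem at Kn = 0.1, Lambda = 4.
print(solve_constants(0.1, 4.0))
```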
In general terms, the main attributes of the solution method proposed here are twofold. First, it replaces a priori knowledge of the exact dependency of the various scalar fields, including spherical-coordinate components of vectors and tensors, on the polar angle – e.g. from the classical solution of Stokes flow past a sphere – with the rather more general constraint that the various tensor fields (scalars, vectors and rank-two tensors) depend on the external perturbation vector $\text{G}$ multiplied by a few appropriately chosen multipole potentials. Second, instead of dealing with the components of the Laplacian of the stress tensor in spherical coordinates and with the scalar differential equations carrying them, we solve a second-order differential equation for the stress in compact, tensor form.

This section contains three parts. First, we discuss predictions from our solution of the R13 moment equations for the problem of thermophoresis of a spherical particle presented in § 2.1. In particular, we show flow and temperature profiles for the sphere's surroundings contrasted with results from kinetic theory, and then compare predictions for the thermophoretic force from our solution with those from other theories as well as with recently published experimental measurements. Second, we present results from R13 and other theories for the problem described in § 2.2, that is, the drag force due to a uniform flow past a sphere, taking into account the particle-to-gas heat conductivity ratio. In both sub-sections, we present profiles of velocity components, density and temperature in a neighbourhood of the spherical surface. Third, the solutions from these two problems are coupled by balancing the thermophoretic force against the drag acting on the sphere when it is free to move, giving rise to the sphere's thermophoretic terminal velocity. We show predictions from R13 and other theories for this quantity. Note that the results presented here for the temperature deviations in the gas and solid, $\theta$ and $\theta_{s}$, and for the gas heat flux, $\text{q}$, correspond to the variables on the left-hand sides of the expressions in (2.8). These are obtained after transforming back from the fields given in (3.23), which represent the variables with '$\check{\,}$'.

Examining the response of some of the variables modelled with the R13 equations to changes in the Knudsen number $Kn$ and the solid-to-gas conductivity ratio $\Lambda$ near the surface of the sphere is of interest. In figure 2, we show the profiles of the velocity components, density and temperature deviations as functions of the radial coordinate starting at the surface of the sphere, $r=1$, obtained with the R13 exact solution derived here, the numerical solutions of the linearized Boltzmann equation by Sone (2007) for a hard-sphere gas, and his asymptotic expression for small Knudsen number. Curves are presented for an isothermal sphere, i.e. $\Lambda\rightarrow\infty$, and $Kn=0$, 0.090, 0.180, 0.269 and 0.539, corresponding to Sone's $k=0$, 0.1, 0.2, 0.3 and 0.6, respectively – the relationship between $Kn$ and $k$ is given in the figure's caption. Predictions from the theories depict typical rarefaction effects, namely velocity slip (figure 2b) and temperature jump (figure 2d) at the sphere's surface. When $k\rightarrow 0$ and for $k=0.1$, R13 agrees well with the results from kinetic theory; however, as $k$ increases the differences between the two models become noticeable, suggesting that R13's quantitative description of the Knudsen layer near the surface of the sphere becomes incomplete for $Kn\gtrsim 0.1$.

Contour plots of gas speed (normalized by $Ep$) and velocity streamlines are presented in figure 3 for combinations of $Kn=0.02$ and 0.2 and $\Lambda=4$ and $\Lambda\rightarrow\infty$. The temperature gradient points from left to right. Note that, except for the case of $Kn=0.02$ and $\Lambda\rightarrow\infty$, the gas flow near the sphere is in the direction of the temperature gradient. Following the reasoning of Sone (2007), this flow is driven by the force exerted by the solid surface on the gas in this direction. This force is the reaction to the momentum transferred onto the sphere's surface by the gas in the opposite direction, that is, from the hot to the cold region. This corresponds to a scenario of normal or 'positive' thermophoresis.
For the combination $Kn=0.02$ and $\Lambda\rightarrow\infty$, on the other hand, the situation is inverted, and the gas flow is from hot to cold. This case is thus known as reversed or 'negative' thermophoresis. We shall discuss this phenomenon below with reference to the macroscopic transport equations of the previous section and, in particular, the boundary condition for slip. This is followed by a quantitative investigation of the net thermophoretic force acting on the sphere, with special attention to its change in direction. Another feature shown in figure 3 is that for $Kn=0.02$, the maximum speed in the case $\Lambda\rightarrow\infty$ is approximately one order of magnitude smaller than in the case $\Lambda=4$, and nearly two orders of magnitude smaller than in the other two cases corresponding to $Kn=0.2$. One may thus expect that, among the four cases considered, the smallest net force on the sphere should be attained in the case $Kn=0.02$ and $\Lambda\rightarrow\infty$.

Contour plots of the gas temperature deviation (normalized by $Ep$), with streamlines for the heat-flux vector in the gas and within the solid sphere, are shown in figure 4 for four cases with $Kn=0.02$ and 0.2 and $\Lambda=4$ and $\Lambda\rightarrow\infty$. The heat-flux vectors point to the left, opposite to the temperature gradient. In the cases where $\Lambda\rightarrow\infty$, the figures show no temperature deviation inside the sphere. In contrast, for $\Lambda=4$, a non-zero temperature gradient is observed in the spherical particle, with the temperature varying linearly with the axial coordinate. In the two cases with $\Lambda=4$, and also in the case where $\Lambda\rightarrow\infty$ and $Kn=0.2$, temperature jumps across the spherical surface are evident. Enhancing the gas rarefaction by increasing $Kn$ for fixed $\Lambda$ increases the temperature jump across the spherical surface. For instance, with $\Lambda=4$, there are more contour lines inside the sphere for $Kn=0.02$ than for 0.2, pointing toward a smoother temperature gradient in the latter, whereas the opposite takes place in the gas, as the isothermal lines are more bent for $Kn=0.02$ than for 0.2. This yields a greater temperature jump at a given point of the sphere's surface ($z\neq 0$) in the case with the higher $Kn$. Another notable feature in figure 4 is that lines of constant temperature intersect the surface of the sphere at various points in the two cases where $\Lambda=4$, as well as for $Kn=0.2$ and $\Lambda\rightarrow\infty$, indicating the presence of a temperature gradient in the tangential direction. Under rarefied conditions, it is known that this temperature gradient induces gas motion along its direction (i.e. from cold to hot) near the solid surface, an effect known as thermal creep or thermal transpiration (Maxwell 1879; Kennard 1938; Sone 2007; Mohammadzadeh et al. 2015). This is the type of flow depicted by the streamlines in the corresponding plots of figure 3. On the other hand, for $Kn=0.02$ and $\Lambda\rightarrow\infty$, the isothermal lines in the gas tend to wrap around the sphere's surface, resulting in temperature gradients that essentially vanish in the tangential direction on the gas side of the sphere's surface.
This points toward a hindering of the thermal creep in comparison with the other cases, in accord with the small velocity magnitudes and, more importantly, the reversal in the direction of the streamlines shown for the same case in figure 3. In fact, this reversed flow, now from the hot to the cold region, occurs because another type of flow, the so-called thermal-stress slip flow (Sone 2007; Young 2011), becomes dominant over the thermal creep. The thermal-stress slip flow represents a Knudsen number higher-order effect and, even though it is also induced by changes in the gas temperature distribution, is of a different nature from the thermal creep. From his asymptotic analysis of the linearized Boltzmann equation for a gas that deviates slightly from a state of uniform equilibrium at rest, Sone (2007) identified that the slip flow in the slip boundary condition was determined by the term proportional to $-\text{t}\cdot\nabla\nabla\theta\cdot\text{n}$ multiplied by a positive constant when the boundary surface has a uniform temperature or in the absence of thermal creep (here $\text{t}$ and $\text{n}$ are unit vectors tangential and normal to the bounding surface, respectively). Because $\nabla\nabla\theta$ multiplied by a constant is one of the terms in his expression for the stress tensor, he designated this flow as thermal-stress slip flow. As discussed by Sone (2007), if the boundary temperature is constant then $-\text{t}\cdot\nabla\nabla\theta\cdot\text{n}=-\text{t}\cdot\nabla(\text{n}\cdot\nabla\theta)=-\partial(\partial\theta/\partial n)/\partial s$, where $s$ is the arc length. Then, if the isothermal surfaces in the gas are not parallel to the solid boundary (also of constant temperature), i.e. the component of the temperature gradient normal to the boundary ($\partial\theta/\partial n$) changes along it, when the boundary temperature is higher (lower) than that in the gas, a flow is promoted in the direction in which the isothermal surfaces converge (diverge). This is exemplified by the contour plots in figures 3 and 4 for $Kn=0.02$ and $\Lambda\rightarrow\infty$, and figure 5 sketches this behaviour for clarity. The thermal-stress slip flow is the primary cause of a negative thermophoretic force.

Recalling the generalized slip boundary condition (2.9), rewritten here for convenience in a slightly different form,

(4.1) $$u_{\vartheta}-u_{s,\vartheta}=-\frac{1}{5}q_{\vartheta}-\left(\frac{\pi}{2}\right)^{1/2}\sigma_{\vartheta r}-\frac{1}{2}\text{m}_{\vartheta rr},$$

setting $u_{s,\vartheta}=0$ and using the expressions from the analytical solution obtained with the R13-moment equations, we investigate the contribution of each of the terms on the right-hand side of (4.1) to the gas slip velocity $u_{\vartheta}$ on the surface of the sphere, $r=1$, and selected $\vartheta$ values.
Considering for this exercise the conditions of figures 3 and 4, we have that for the cases of $\Lambda=4$, and also for $\Lambda\rightarrow\infty$ and $Kn=0.2$, where thermal creep is predominant, the greatest contribution in absolute value is from the first term on the right-hand side, giving a negative value (flow from cold to hot), whereas for the remaining case of $\Lambda\rightarrow\infty$ and $Kn=0.02$, exhibiting thermal-stress slip flow, the largest contribution is positive and results from the shear-stress term $-(\pi/2)^{1/2}\sigma_{\vartheta r}$. In all cases, the last term in (4.1), a Knudsen number higher-order term, results in a significantly smaller absolute value in comparison. Using now the balance equation for the heat flux (2.6b) for $q_{\vartheta}$ when the flow has the direction of the temperature gradient, or the balance equation for the deviatoric stress (2.6a) for $\sigma_{\vartheta r}$ when the flow has the opposite direction – after moving the terms with derivatives of $\text{q}$ and $\boldsymbol{\sigma}$, of higher order in $Kn$, to the right-hand side – we found that in the former the term $(3/4)Kn\,\partial\theta/\partial\vartheta$ carries the largest weight, whereas in the latter the term proportional to $Kn(\overline{\nabla\text{q}})_{\vartheta r}$ is predominant. Note that this term, and not the shear stress from classical hydrodynamics, $-2Kn(\overline{\nabla\text{u}})_{\vartheta r}$, is the leading contributor in this case. Examination of the term proportional to $Kn(\overline{\nabla\text{q}})_{\vartheta r}$ by substitution of (2.6b) for $\text{q}$ when the slip flow is from hot to cold revealed that, as expected, the major role in this situation is played by the term $-3(\pi/2)^{1/2}Kn^{2}(\nabla\nabla\theta)_{\vartheta r}$ (${\approx}-3(\pi/2)^{1/2}Kn^{2}\,\partial^{2}\theta/\partial r\,\partial\vartheta$ when $\partial\theta/\partial\vartheta\approx 0$), in agreement with the argument of Sone (2007) for the thermal-stress slip flow and with the illustration in figure 5.

From a microscopic perspective, Sone (2007) emphasizes that for either thermal creep or thermal-stress slip flow, the gas motion is induced by the difference between the velocity distribution functions of the molecules colliding with the solid boundary and those leaving it. The different nature of the flow and force in thermal creep and in thermal-stress slip flow stems from the fact that, before impinging onto the solid boundary, particles starting within a mean free path essentially keep the attributes of their origins, and the temperature distribution in the gas surrounding the boundary is notably different in the two cases.

The thermophoretic force acting on the sphere is obtained by integrating over the surface of the sphere the projection of the total stress vector at $r^{\ast}=a^{\ast}$ onto the direction $\text{k}$, i.e. $(-p^{\ast}\hat{\text{r}}-\boldsymbol{\sigma}^{\ast}\cdot\hat{\text{r}})\cdot\text{k}$.
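A numerical counterpart of this surface integral is sketched below (Python/SciPy). The three surface profiles are placeholders standing in for the pressure and stress components delivered by the solution (3.23) at $r=1$; the only ingredients used are the axisymmetry of the problem and the projections $\hat{\text{r}}\cdot\text{k}=\cos\vartheta$ and $\hat{\boldsymbol{\vartheta}}\cdot\text{k}=-\sin\vartheta$.

```python
# Sketch: force along k by quadrature of (-p r_hat - sigma . r_hat) . k over
# the unit sphere (dimensionless, axisymmetric fields evaluated at r = 1).
# The surface profiles below are placeholders; in practice they come from the
# closed-form solution (3.23) once the constants A1..F2 are known.
import numpy as np
from scipy.integrate import quad

def force_along_k(p_surf, sigma_rr_surf, sigma_tr_surf):
    # r_hat . k = cos(theta), theta_hat . k = -sin(theta),
    # surface element dA = 2*pi*sin(theta) dtheta on the unit sphere.
    def integrand(th):
        proj = -(p_surf(th) + sigma_rr_surf(th)) * np.cos(th) \
               + sigma_tr_surf(th) * np.sin(th)
        return proj * 2.0 * np.pi * np.sin(th)
    value, _ = quad(integrand, 0.0, np.pi)
    return value

# Placeholder profiles with the angular dependence of the linearized solution
# (the amplitudes are arbitrary, for illustration only).
F = force_along_k(p_surf=lambda th: 0.10 * np.cos(th),
                  sigma_rr_surf=lambda th: 0.05 * np.cos(th),
                  sigma_tr_surf=lambda th: 0.02 * np.sin(th))
print(F)
```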
Young (2011) introduced the dimensionless thermophoretic force

(4.2) $$\Phi=\frac{1}{Ep\,Kn}\,\frac{F_{T}^{\ast}}{\mu_{0}^{\ast}\,\theta_{0}^{\ast\,1/2}a^{\ast}},$$

where $F_{T}^{\ast}$ denotes the dimensional thermophoretic force. From our solution of the R13 equations for the pressure and the deviatoric stress, we obtain the exact expression

(4.3) $$\Phi=-12\pi\,\frac{\sum_{m=0}^{7}(\alpha_{m}^{(0)}+\alpha_{m}^{(1)}\Lambda)Kn^{m}}{\sum_{m=0}^{9}(\beta_{m}^{(0)}+\beta_{m}^{(1)}\Lambda)Kn^{m}},$$

where the coefficients $\alpha_{m}^{(0)}$, $\alpha_{m}^{(1)}$, $\beta_{m}^{(0)}$ and $\beta_{m}^{(1)}$ are given in table 1. For the particular case of an isothermal sphere, we simply take the limit $\Lambda\rightarrow\infty$ in this expression, resulting in $\Phi=-12\pi\sum\alpha_{m}^{(1)}Kn^{m}/\sum\beta_{m}^{(1)}Kn^{m}$, with the index $m$ spanning the same ranges as in (4.3). Results from (4.3) are plotted as $-\Phi/(2\pi)$ versus $(\pi/2)^{1/2}Kn$ for $\Lambda=4$, 10 and $22.4\times 10^{3}$ in figure 6. The highest $\Lambda$ value is motivated by one of the experimental data sets depicted in the figure (see below). We also include predictions from various models based on the linearized Boltzmann equation, namely by Sone & Aoki (1983) for an isothermal sphere using the BGK equation – denoted in their work as the Boltzmann–Krook–Welander (BKW) equation – by Beresnev & Chernyak (1995) using the S model, and by Sone (2007) for a hard-sphere gas. Results from G13 and from its modification represented by Young's (2011) interpolation formula are also added. In addition, values from Waldmann's (1959) formula, valid for large $Kn$ and insensitive to changes in $\Lambda$, are presented. This formula, as well as the expressions for the dimensionless thermophoretic force from G13 and Young (2011), is presented, for completeness, in appendix D.

In figure 6 we included the experimental results on thermophoresis recently presented by Bosworth & Ketsdever (2016) for spheres made of the polymer acrylonitrile butadiene styrene (ABS) and by Bosworth et al. (2016) for spheres of copper. In the latter, the data include observations of reversed or negative thermophoresis. In the experiments, they set the sphere in the mid-plane between two copper plates and placed the entire assembly in a vacuum chamber filled with argon. Rarefied conditions were attained by reducing the pressure to values significantly below atmospheric pressure. The force was generated by keeping one plate at the ambient temperature while the other plate was heated uniformly to a higher temperature. To plot the data in figure 6 – originally presented as dimensional force versus pressure – we used an ambient temperature of 24.5 $^{\circ}$C (Bosworth, R. & Ketsdever, A. 2017, private communication)
and a temperature gradient between the plates of $35~\text{K}~\text{m}^{-1}$, a distance between the plates of 0.40 m, a particle radius of 0.0254 m, a specific gas constant of $208~\text{J}~\text{kg}^{-1}~\text{K}^{-1}$ and a gas viscosity of $2.295\times 10^{-5}~\text{Pa}~\text{s}$ at the mid-plane temperature (NIST 2017). The solid-to-gas heat conductivity ratios were estimated to be $\Lambda=10$ and $22.4\times 10^{3}$ for the ABS and copper spheres, respectively. In practice, in the copper case, such a large value would correspond to an isothermal sphere ($\Lambda\rightarrow\infty$). To account for Knudsen number effects in the temperature gradient at the mid-plane between the plates, we adopted the model in chapter 4 of Sone (2007) based on the linearized Boltzmann equation for a hard-sphere gas. These gas rarefaction effects include a lower temperature gradient at the mid-plane with respect to the nominal value, gas–wall temperature jumps at the plates' surfaces and noticeable deviations from linearity in the temperature profile in regions near the walls (Knudsen layers). These effects are evident, for instance, in figure 5 of Bosworth et al. (2016), obtained using DSMC. This corrected value of the temperature gradient was used to compute the Epstein number $Ep$ for each experimental point. We obtain values of $Ep$ of the order of $3.0\times 10^{-3}$ for the experimental data sets.

For $\sqrt{\pi/2}\,Kn\lesssim 0.1$, figure 6 shows that the thermophoretic force from R13 agrees very closely with the results from models of the linearized Boltzmann equation. In contrast, G13 significantly under-predicts these results. As demonstrated in the figure, this discrepancy is corrected in Young's (2011) interpolation formula, obtained from G13 after replacing the thermal creep, velocity slip and temperature jump coefficients – the so-called Maxwell–Smoluchowski values (e.g. see Kennard 1938; Nguyen & Wereley 2002; Gu & Emerson 2007; Young 2011) – with those resulting from solutions of model Boltzmann equations compiled by Sharipov (2004). Young argues that changing these coefficients in the boundary conditions of G13 is needed to compensate for the inability of this model to reproduce the Knudsen layer. On the other hand, our analytical result from R13 gives the correct prediction without changing the original numerical factors appearing in the various terms of both the boundary conditions and the bulk equations, i.e. with no fitting. Whereas the new set of coefficients adopted by Young (2011) for G13 works well for the thermophoretic force on a sphere at very small $Kn$, it might not be applicable to situations other than the one for which it was fitted (e.g. different geometries). When $Kn\rightarrow 0$ we have from (4.3) for R13 that
(4.4) $$\Phi=-12\pi\left[\frac{1.13776419}{\Lambda+2}-1.03507976\,\frac{\Lambda^{2}+8.34774566\,\Lambda+6.22407475\times 10^{-1}}{(\Lambda+2)^{2}}\,Kn\right]+O(Kn^{2}),$$

whereas in the special case $\Lambda\rightarrow\infty$ (isothermal sphere), we find the following asymptotic result when $Kn\rightarrow 0$:

(4.5) $$\Phi=39.0215875\,Kn-1043.54657\,Kn^{2}+O(Kn^{3}).$$

Note from these expressions that for very small $Kn$ and finite $\Lambda$, $-\Phi$ tends to a positive constant, indicating positive or normal thermophoresis. On the other hand, after passing to the limit $\Lambda\rightarrow\infty$, the asymptotic result shows that $-\Phi$ tends to zero linearly with $Kn$ through negative values, corresponding to slightly reversed thermophoresis. A similar trend can be observed in the asymptotic formulae by Sone (2007).

Within the interval $0.02\lesssim\sqrt{\pi/2}\,Kn\lesssim 0.2$, figure 6(c) shows that the experimental data for the copper sphere exhibit negative or reversed thermophoresis, $-\Phi<0$. The reversed net force direction, such that it is now from the cold to the hot region for large values of $\Lambda$, is mainly caused by the thermal-stress flow near the sphere's surface. In this figure, R13, G13, Young's formula and the models by Sone & Aoki (1983) and Beresnev & Chernyak (1995) predict negative thermophoresis. In particular, the contour plots for $Kn=0.02$ and $\Lambda\rightarrow\infty$ in figures 3 and 4 show the various features that characterize a scenario of negative thermophoresis, such as the reversed flow direction and the absence of temperature changes on the gas side along the solid surface, in comparison with the other cases considered. Nonetheless, whilst R13 agrees well with the benchmark Boltzmann solutions (for hard spheres and the S model), both predict force magnitudes significantly smaller than the experimental values; a discrepancy which at present we cannot explain. Referring again to the values of $Kn$ and $\Lambda$ considered in figure 3, the force magnitude in the case of $Kn=0.02$ and the largest $\Lambda$ is, according to figure 6, notably smaller than for the other cases, which seems to correspond with the fact that the velocity magnitude in this case is also one or more orders of magnitude smaller than in the others. The expressions for the thermophoretic force from R13 and G13, as well as Young's interpolation formula, exhibit a critical $\Lambda$ above which reversed thermophoresis occurs for some bounded $Kn$ interval. For R13, this threshold is $\Lambda=87.17$, whereas for G13 it is a much lower ratio, perhaps less likely to actually occur, of 10.93. In the case of Young's interpolation formula the threshold is $\Lambda=26.05$. For larger, but finite, values of $\Lambda$, the corresponding curve of $-\Phi/(2\pi)$ intersects the $\Phi=0$ axis at two points, both with $Kn>0$. The maximum negative value of $-\Phi/(2\pi)$ occurs when $\Lambda\rightarrow\infty$.
From the R13 model, this is $-\Phi/(2\pi)=-0.068$ at $Kn=0.024$; for G13 it is $-\Phi/(2\pi)=-0.321$ at $Kn=0.087$, and for Young's formula it is $-\Phi/(2\pi)=-0.180$ at $Kn=0.049$. Negative thermophoresis cannot occur for $Kn\geqslant 0.054$ for R13, $Kn\geqslant 0.251$ for G13 and $Kn\geqslant 0.130$ for Young's formula; these critical values are attained with $\Lambda\rightarrow\infty$. This upper limit of $Kn$ decreases for $\Lambda<\infty$. The critical magnitudes of the Knudsen numbers listed in this paragraph were instrumental in choosing $Kn=0.02$ and 0.2 for the contour plots in figures 3 and 4.

For $0.2\lesssim\sqrt{\pi/2}\,Kn\lesssim 1$, R13 shows qualitatively the best performance among the models derived with the moments method for both the copper and the polymer (ABS) data. In particular, for $\sqrt{\pi/2}\,Kn\lesssim 0.4$, its predictions agree with the experimental points for the copper sphere. It is likely coincidental that R13 performs better with respect to the experimental data than the benchmark models of Beresnev & Chernyak (1995) and Sone (2007) from kinetic theory. G13 shows a rather unsatisfactory performance, as its maximum is significantly shifted to higher $Kn$ values. When $Kn>1$, Waldmann's formula matches the experiments, as expected. For these larger $Kn$ values, the various model Boltzmann equations also approximate the experiments well. G13 significantly over-predicts these values, although it tends to zero in the same asymptotic manner as Waldmann's formula, that is, as $Kn^{-1}$ when $Kn\rightarrow\infty$ (see appendix D), but evidently with an incorrect coefficient. By construction, Young's formula matches Waldmann's for large $Kn$. On the other hand, for R13, as $Kn\rightarrow\infty$, we obtain from (4.3) that

(4.6) $$\Phi=-3.80431806\,Kn^{-2}-\frac{5.46577139\,\Lambda-5.32067322\times 10^{-1}}{\Lambda\,Kn^{3}}+O(Kn^{-4}),$$

which thus goes to zero faster than Waldmann's. For large $Kn$ ($Kn>1$, say), the heat conductivity ratio $\Lambda$ plays a fairly inconsequential role. In this regime, according to figure 6, R13 under-predicts both the experimental force and Waldmann's results. It is known that for such large values of $Kn$, models such as G13 and R13 are not expected to give the correct quantitative result (Torrilhon 2010).

Profiles of the gas velocity components, density and temperature deviations as functions of the radial coordinate computed with the R13 exact solution of the previous section are presented in figure 7 for the case of a uniform flow past a stationary sphere with thermal conductivity ratio $\Lambda\rightarrow\infty$ (isothermal sphere). For $Kn=0.05$ and 0.3, our solution is validated, as its predictions match very well the exact solution of Torrilhon (2010) obtained from his most recent Wolfram Mathematica code (see § 5.3 in Claydon et al. 2017). Good agreement is also observed, in general, between our solution and results from the asymptotic expressions of Sone (2007) when $Kn\rightarrow 0$ ($k\rightarrow 0$) and his numerical solution of a kinetic theory model for $Kn=0.090$ ($k=0.1$).
The drag force acting on the sphere is computed from the same surface integral mentioned in the previous sub-section, but using the pressure and deviatoric stress fields obtained from the solution of the problem of slow flow past a sphere. In dimensionless form, this drag force from R13 may be written as

(4.7) $$\frac{F_{D}^{\ast}}{F_{\text{Stokes}}^{\ast}}=\frac{2+\Lambda+\sum_{m=1}^{7}(\alpha_{m}^{(0)}+\alpha_{m}^{(1)}\Lambda)Kn^{m}}{2+\Lambda+\sum_{m=1}^{9}(\beta_{m}^{(0)}+\beta_{m}^{(1)}\Lambda)Kn^{m}},$$

where $F_{\text{Stokes}}^{\ast}$ denotes the Stokes formula for the drag on a sphere, $F_{\text{Stokes}}^{\ast}=6\pi\mu_{0}^{\ast}U_{0}^{\ast}a^{\ast}$, and the coefficients $\alpha_{m}^{(0)}$, $\alpha_{m}^{(1)}$, $\beta_{m}^{(0)}$ and $\beta_{m}^{(1)}$ are given in table 2. For the special case of the drag on an isothermal sphere due to a streaming flow, expression (4.7), in the limit $\Lambda\rightarrow\infty$, reduces to $F_{D}^{\ast}/F_{\text{Stokes}}^{\ast}=(1+\sum\alpha_{m}^{(1)}Kn^{m})/(1+\sum\beta_{m}^{(1)}Kn^{m})$, where $m$ takes the same values as in (4.7). It should be noted that Torrilhon (2010) showed the curve of drag versus Knudsen number resulting from his analysis, but did not present a closed-form expression for this force. As $Kn\rightarrow 0$, the drag can be computed from

(4.8) $$\frac{F_{D}^{\ast}}{F_{\text{Stokes}}^{\ast}}=1-1.30731306\,Kn+\frac{3.51286149+1.41122879\,\Lambda}{2+\Lambda}Kn^{2}+O(Kn^{3}).$$

Furthermore, when $Kn\rightarrow\infty$, we have from (4.7) the expression

(4.9) $$\frac{F_{D}^{\ast}}{F_{\text{Stokes}}^{\ast}}=4.86486844\times 10^{-1}Kn^{-2}-2.50036697\times 10^{-1}Kn^{-3}-\frac{1.03874711\times 10^{-3}+6.79151242\times 10^{-2}\,\Lambda}{\Lambda\,Kn^{4}}+O(Kn^{-5}).$$

Figure 8 shows the dimensionless drag force as a function of $Kn$ for $\Lambda=4$ and $\Lambda\rightarrow\infty$. We added results extracted from the work by Torrilhon (2010) for an isothermal sphere. The curves match each other very well. This clearly indicates that the drag force is insensitive to changes in the solid-to-gas thermal conductivity ratio, a tendency already noted by Sone (2007) from his analysis based on the linearized Boltzmann equation in chapter 4 of his monograph. For small and large $Kn$, this trend becomes evident in expressions (4.8) and (4.9), respectively. The matching with Torrilhon's results serves as validation of the solution method adopted in our analysis. In addition, we included predictions from Young's (2011) solution of the G13 equations for an isothermal sphere, using the Maxwell–Smoluchowski values for the thermal creep, velocity slip and temperature jump coefficients, as well as the coefficients extracted by Young from the work of Sharipov (2004) (see appendix D). For the interval shown, no significant difference is noted between these two sets. We also included predictions from kinetic theory by Sone (2007) for a hard-sphere gas and an isothermal sphere.
Whereas G13 over-predicts the results by Sone, the R13 tendency is to under-predict them, although with a smaller difference. For $Kn\lesssim 0.4$, R13 approximates Sone's curve very well. For each of the models, as in the case of R13, changing $\Lambda$ from infinity to 4 produces no significant changes. Figure 8 also includes experimental data by Goldberg (1954) (extracted from Torrilhon 2010) and by Allen & Raabe (1982) (extracted from Claydon et al. 2017). Both R13 and Sone (2007) under-predict the measured drag, with the former showing the larger difference.

The thermophoretic velocity is the velocity of a moving particle driven by the thermophoretic force when this force is balanced by the drag, that is, when the net force on the particle vanishes. Setting $F_{T}^{\ast}+F_{D}^{\ast}=0$, and using expression (4.2) for $F_{T}^{\ast}$, we readily have for the dimensionless velocity

(4.10) $$Ma=-\frac{\Phi}{\Psi}\,Ep\,Kn,$$

expressed as the pseudo-Mach number. Previously, $Ma$ was a prescribed parameter; now it is the unknown. Here $\Psi$, a dimensionless quantity introduced by Young (2011), is equal to $6\pi F_{D}^{\ast}/F_{\text{Stokes}}^{\ast}$. In general, it is a function of $Kn$ and $\Lambda$. The expression for $\Psi$ from the G13 theory is given in appendix D. With (4.3) and (4.7) for the non-dimensional thermophoretic force and drag, respectively, relation (4.10) provides an expression for the thermophoretic velocity according to R13. Figure 9 shows results from this expression, normalized by the factor $\sqrt{\pi/2}\,Ep\,Kn$, as a function of $\sqrt{\pi/2}\,Kn$ for different values of $\Lambda$. This figure includes predictions from G13 and from solutions of model Boltzmann equations (Beresnev & Chernyak 1995; Sone 2007), and from the combination in (4.10) of Waldmann's formula for $\Phi$ and Epstein's formula for $\Psi$, intended for the free-molecule regime of large $Kn$ (see appendix D). The velocity from this last model is independent of $\Lambda$. From figure 9 we note that R13 predicts the correct values given by the model Boltzmann equations for $\sqrt{\pi/2}\,Kn\lesssim 0.2$. For greater values of $\sqrt{\pi/2}\,Kn$, the agreement deteriorates. For $Kn\rightarrow\infty$, R13 predicts a constant value, as is the case with the Waldmann–Epstein formula and with the results from kinetic theory, but with a smaller magnitude. G13, on the other hand, although also tending to a constant value, shows significant discrepancies with the other theories considered. Even though R13 is outside its limits of applicability for $Kn>1$, it provides a reasonable engineering tool in that it has the correct asymptotic behaviour and, quantitatively, its differences with the benchmark models from kinetic theory and the Waldmann–Epstein formula are not significantly large.
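As a small worked illustration of (4.10): in the free-molecule limit, combining Waldmann's $\Phi$ (D 4) with Epstein's $\Psi$ (D 5) gives $-\Phi/\Psi=(2\pi/Kn')\big/[(8\pi/3Kn')(1+\pi/8)]=6/(8+\pi)\approx 0.54$, independent of $Kn$ and $\Lambda$, so that $Ma/(Ep\,Kn)$ tends to a constant, consistent with the limiting behaviour discussed above. A short check in Python:

```python
# Worked check of (4.10) in the free-molecule limit, using Waldmann's Phi (D 4)
# and Epstein's Psi (D 5); the Kn' dependence cancels in the ratio.
import numpy as np

def Ma_over_EpKn_free_molecule():
    # -Phi/Psi = (2*pi/Kn') / ((8*pi/(3*Kn')) * (1 + pi/8)) = 6/(8 + pi)
    return 6.0 / (8.0 + np.pi)

print(Ma_over_EpKn_free_molecule())   # ~0.539
```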
In this work, we investigated theoretically an instance of the phenomenon of thermophoresis, a term that refers to the forces on and motions of objects caused by temperature gradients when these objects are exposed to rarefied gases. In particular, we considered the problem of thermophoresis of a spherical particle, obtaining an analytical solution by solving the R13 moment equations, a model that provides a macroscopic description of rarefied gas flows up to the transition regime for Knudsen numbers smaller than one. Besides writing expressions for the field variables describing the fluid flow and heat transfer problem, by integration of the total stress on the surface of the sphere we obtained a closed-form expression for the thermophoretic force as a function of the Knudsen number, the dimensionless temperature gradient (Epstein number) and the solid-to-gas heat conductivity ratio.

Employing the closed-form expressions for the field variables in the gas obtained here, we plotted their profiles in a neighbourhood of the sphere alongside predictions from a model Boltzmann equation from the literature. This comparison revealed that up to a Knudsen number of about 0.1, based on the sphere's radius, the agreement between these solutions is very good, thereby showing that the solution based on R13 is capable of fully describing the Knudsen layer for this particular physical situation in this range of Knudsen numbers. We also showed contour plots of speed and temperature, including velocity and heat-flux streamlines, for various conditions. These figures exhibit the interplay between the various mechanisms involved in the complex process of thermophoresis, namely the more common gas thermal creep toward the hot region, caused by a temperature gradient in the gas parallel to the solid surface, and, under very specific conditions in the slip regime that include a highly thermally conductive solid, the reversal of the flow direction resulting from the then dominant mechanism of slip flow driven by thermal stresses.

We extended the same modelling approach with the R13 equations to obtain an analytical solution for the drag by considering the problem of a heat-conducting spherical particle in a uniform, slow gas stream, thereby extending the analysis by Torrilhon (2010) for an isothermal sphere. Predictions from the new solution are insensitive to changes in the solid-to-gas thermal conductivity ratio and agree very well with the results of Torrilhon, who used a different method to obtain his solution. For Knudsen numbers smaller than 0.4, R13 approximates kinetic theory results for hard spheres reasonably well. We then computed the thermophoretic velocity of the spherical particle when it is free to move, driven by the thermophoretic force. The expression for the velocity results from balancing the thermophoretic force with the drag resistance exerted by the surrounding gas on the sphere. For Knudsen numbers smaller than approximately 0.2, values of the thermophoretic velocity from R13 show good agreement with those from two models of the Boltzmann equation.

Results from the new thermophoretic force model derived with the R13 equations were compared with results from G13 and from various models based on the linearized Boltzmann equation, such as BGK and the S model, and with data from very recent experiments, for a wide range of Knudsen numbers. For Knudsen numbers below approximately 0.1, R13 results match predictions from the model Boltzmann equations. In this interval, the various theories considered, including R13, significantly under-predict the experimental measurements of reversed thermophoretic force for a metallic sphere (highly thermally conductive).
For Knudsen numbers lying between approximately 0.1 and 1, the graphs from R13 follow qualitatively the experimental curves for both low and high solid-to-gas thermal conductivity ratios, and in general its predictions show the smallest differences with the experimental data, although these differences remain appreciable. Surprisingly, solutions of the models of the Boltzmann equation taken as benchmarks show larger discrepancies with the experiments. There are more involved macroscopic models based on the moments method, such as the R26 equations of Gu & Emerson (2009), that exhibit higher accuracy in the transition regime. Nevertheless, modelling phenomena in rarefied gas dynamics with the R13 equations perhaps provides the best compromise between a sufficiently rich mathematical description, capable of capturing the most significant features not only in the bulk but also near the boundaries (Knudsen layer), and an analytically tractable set of equations. In any case, the exact solution presented here can be useful in the validation of numerical tools developed for the R13 equations when simulating flow and heat transfer phenomena occurring either in the interior of or external to complicated geometries.

We acknowledge the support of the Engineering and Physical Sciences Research Council (grant nos EP/N016602/1, EP/P020887/1 and EP/P031684/1) and the Leverhulme Trust (Research Project Grant). We also acknowledge and are thankful for informative communications from R. Bosworth and A. Ketsdever of the University of Colorado, Colorado Springs, regarding their experiments. We are grateful to Dr L. Gibelli at the University of Warwick for fruitful discussions.

Appendix A. Multipole potentials

We define in this appendix some concepts from the theory of multipole potentials and briefly review some of its properties relevant to the work presented in the main body of this paper. For a more detailed discussion of the fundamentals of the method and its application to a variety of problems, the reader is referred to the textbooks by Leal (2007, chap. 8) – who refers to it as the method of superposition of vector harmonic functions – and Hess (2015, chap. 10). The material included in this review is taken from the latter. The multipole potentials can be defined as tensorial solutions of the Laplace equation. There are two classes of multipole potentials, namely descending and ascending potentials. The descending multipole potentials tend to zero when $r\rightarrow\infty$ and diverge when $r\rightarrow 0$, where $r\equiv|\text{x}|$ and $r^{2}=x_{m}x_{m}$. The descending multipole potentials are defined by

(A 1) $$X_{ij\cdots\ell}\equiv(-1)^{\ell}\frac{\partial^{\ell}}{\partial x_{i}\,\partial x_{j}\cdots\partial x_{\ell}}r^{-1}=-\frac{\partial}{\partial x_{\ell}}X_{ij\cdots(\ell-1)}.$$

The first two descending potentials are given by

(A 2) $$X_{0}=r^{-1},$$

(A 3) $$X_{i}=-\frac{\partial}{\partial x_{i}}X_{0}=r^{-3}x_{i},$$

with the second potential being known as the dipole potential.
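As a quick consistency check of these definitions (a sketch assuming SymPy is available, not part of the original text), one can verify symbolically that $X_{0}$ and the dipole potential $X_{i}$ are harmonic away from the origin; the same check applies to the higher-rank potentials listed next.

```python
# Sketch: verify that X0 = 1/r and the dipole potential X_i = x_i / r^3 solve
# Laplace's equation for r != 0, as required of descending multipole potentials.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

def laplacian(f):
    return sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

X0 = 1 / r
Xi = x / r**3           # the i = x component of the dipole potential

print(sp.simplify(laplacian(X0)))   # 0
print(sp.simplify(laplacian(Xi)))   # 0
```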
The quadrupole and octupole potential tensors are

(A 4) $$X_{ij}=3r^{-5}(x_{i}x_{j}-r^{2}\delta_{ij}/3),$$

(A 5) $$X_{ijk}=15r^{-7}x_{i}x_{j}x_{k}-3r^{-5}(x_{i}\delta_{jk}+x_{j}\delta_{ik}+x_{k}\delta_{ij}),$$

respectively. The rank-four multipole potential tensor is

(A 6) $$X_{ijk\ell}=105r^{-9}x_{i}x_{j}x_{k}x_{\ell}-15r^{-7}(x_{j}x_{k}\delta_{i\ell}+x_{i}x_{k}\delta_{j\ell}+x_{i}x_{j}\delta_{k\ell}+x_{i}x_{\ell}\delta_{jk}+x_{j}x_{\ell}\delta_{ik}+x_{k}x_{\ell}\delta_{ij})+3r^{-5}(\delta_{i\ell}\delta_{jk}+\delta_{j\ell}\delta_{ik}+\delta_{k\ell}\delta_{ij}).$$

Multipole potentials satisfy

(A 7) $$\Delta(g(r)X_{ij\cdots\ell})=(g''-2\ell r^{-1}g')X_{ij\cdots\ell}=r^{2\ell}(r^{-2\ell}g')'X_{ij\cdots\ell}.$$

This property is helpful in solving differential equations with elliptic operators. Multipole potential tensors are symmetric in any pair of indices and, because they solve the Laplace equation, also vanish after contracting any pair of indices. Finally, for problems in interior domains, the so-called ascending multipoles are needed. They arise because in (A 7) the factor $g''-2\ell r^{-1}g'=r^{2\ell}(r^{-2\ell}g')'$ vanishes not only for $g=1$ – leading to the descending multipoles – but also for $g=r^{2\ell+1}$. Therefore, the Laplace equation is also solved by

(A 8) $$r^{2\ell+1}X_{ij\cdots\ell},$$

which are known as ascending multipole potentials. They are zero at $r=0$ and, for $\ell>0$, diverge when $r\rightarrow\infty$.

Appendix B. Solution of the ordinary differential equations

Consider the real-valued function $\varphi(x)$. The ordinary differential equations in (3.20a) and (3.20c) can be represented in the generic form

(B 1) $$x\varphi''-2n\varphi'-\lambda^{2}x\varphi=0,$$

where $n=0,1,2,\ldots$ and $\lambda$ is a given parameter. The exact solution of this differential equation can be extracted from the handbook of solutions of ordinary differential equations by Zaitsev & Polyanin (2002, p. 219). It can be written as

(B 2) $$\varphi=x^{n+1/2}[{\mathcal{A}}_{0}J_{n+1/2}(\text{i}\lambda x)+{\mathcal{B}}_{0}Y_{n+1/2}(\text{i}\lambda x)],$$

where $J_{n+1/2}$ and $Y_{n+1/2}$ are the Bessel functions of half-integer order of the first and second kind, respectively, $\text{i}$ is the imaginary unit, and ${\mathcal{A}}_{0}$ and ${\mathcal{B}}_{0}$ are arbitrary constants. With the relations (e.g. Arfken et al. 2012, chap. 14; or Abramowitz & Stegun 1972, chap.
9)

(B 3) $$J_{\nu}(\text{i}x)=\text{i}^{\nu}I_{\nu}(x),$$

(B 4) $$Y_{\nu}(\text{i}x)=\text{i}^{\nu+1}I_{\nu}(x)-\frac{2}{\pi}\text{i}^{-\nu}K_{\nu}(x),$$

where $I_{\nu}$ and $K_{\nu}$ are the modified Bessel functions of the first and second kind, respectively, and $\nu$ may be a complex number, and introducing the modified spherical Bessel functions (notice the different scaling factors in these definitions)

(B 5) $$i_{n}(x)=\sqrt{\frac{\pi}{2x}}I_{n+1/2}(x),$$

(B 6) $$k_{n}(x)=\sqrt{\frac{2}{\pi x}}K_{n+1/2}(x),$$

we can write solution (B 2) as

(B 7) $$\varphi=x^{n+1}[{\mathcal{A}}_{0}i_{n}(\lambda x)+{\mathcal{B}}_{0}k_{n}(\lambda x)].$$

This result indicates that the substitution $\varphi(x)=x^{\varepsilon}\tilde{\varphi}(\lambda x)$ in (B 1) with the choice $\varepsilon=n+1$ leads to the modified spherical Bessel differential equation for $\tilde{\varphi}$, an expression that, unlike (B 1), is rather well-known. Because $i_{n}(x)$ grows unbounded whereas $k_{n}(x)$ tends to zero when $x\rightarrow\infty$, we set ${\mathcal{A}}_{0}=0$ in order to use (B 7) to represent the solutions of (3.20a) and (3.20c) in the main body of the document. Finally, we can write $k_{n}$ in terms of elementary functions with the relations

(B 8) $$k_{0}(x)=\frac{\exp(-x)}{x},$$

(B 9) $$k_{1}(x)=\exp(-x)\left(\frac{1}{x}+\frac{1}{x^{2}}\right),$$

(B 10) $$k_{2}(x)=\exp(-x)\left(\frac{1}{x}+\frac{3}{x^{2}}+\frac{3}{x^{3}}\right),$$

which can be extended with the recurrence formula $k_{n-1}(x)-k_{n+1}(x)=-(2n+1)k_{n}(x)/x$ (Arfken et al. 2012). Using (B 9) and (B 10), we can obtain the expressions in (3.21a) and (3.21c), respectively.
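The elementary forms (B 8)–(B 10) and the recurrence can be checked numerically; a short sketch (Python/SciPy; our check, not part of the original derivation) is given below. Note that SciPy's spherical_kn uses the scaling $\sqrt{\pi/(2x)}\,K_{n+1/2}(x)$, so it equals $\pi/2$ times the $k_{n}$ defined in (B 6), which is precisely the scaling difference flagged before (B 5) and (B 6).

```python
# Sketch: check (B 8)-(B 10) and the recurrence against SciPy. Note the scaling:
# scipy.special.spherical_kn(n, x) = sqrt(pi/(2x)) K_{n+1/2}(x), whereas (B 6)
# defines k_n(x) = sqrt(2/(pi x)) K_{n+1/2}(x) = (2/pi) * spherical_kn(n, x).
import numpy as np
from scipy.special import spherical_kn

def k0(x): return np.exp(-x) / x
def k1(x): return np.exp(-x) * (1.0/x + 1.0/x**2)
def k2(x): return np.exp(-x) * (1.0/x + 3.0/x**2 + 3.0/x**3)

x = np.linspace(0.5, 10.0, 5)
for n, kn in enumerate((k0, k1, k2)):
    assert np.allclose(kn(x), (2.0/np.pi) * spherical_kn(n, x))

# Recurrence k_{n-1}(x) - k_{n+1}(x) = -(2n+1) k_n(x) / x, checked for n = 1.
assert np.allclose(k0(x) - k2(x), -3.0 * k1(x) / x)
print("modified spherical Bessel checks passed")
```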
Appendix C. Expressions for trace-free symmetric tensors

For a rank-two tensor $\text{Q}$, the associated symmetric trace-free tensor is given by

(C 1) $$\overline{\text{Q}}=\tfrac{1}{2}(\text{Q}+\text{e}_{\alpha}\cdot\text{Q}\,\text{e}_{\alpha})-\tfrac{1}{3}(\text{Q}:\mathbf{1})\,\mathbf{1},$$

where $\mathbf{1}$ is the (rank-two) identity tensor and $\text{e}_{\alpha}$, with $\alpha=1$, 2 and 3, denotes a set of orthonormal basis vectors. A repeated subscript in a term implies summation. Note that $\text{e}_{\alpha}\cdot\text{Q}\,\text{e}_{\alpha}$ is just the transpose of $\text{Q}$ and $\text{Q}:\mathbf{1}$ its trace. For a rank-three tensor $\text{A}$, the corresponding symmetric tensor $\widetilde{\text{A}}$ is

(C 2) $$\widetilde{\text{A}}=\tfrac{1}{6}(\text{A}+\text{e}_{\alpha}\cdot\text{A}\cdot\text{e}_{\beta}\,\text{e}_{\alpha}\text{e}_{\beta}+\text{e}_{\alpha}\text{e}_{\beta}\cdot\text{A}\cdot\text{e}_{\alpha}\text{e}_{\beta}+\text{e}_{\alpha}\text{e}_{\beta}\,\text{e}_{\alpha}\cdot\text{A}\cdot\text{e}_{\beta}+\text{e}_{\alpha}\,\text{A}\cdot\text{e}_{\alpha}+\text{e}_{\alpha}\cdot\text{A}\,\text{e}_{\alpha}),$$

and the corresponding symmetric trace-free tensor $\overline{\text{A}}$ is

(C 3) $$\overline{\text{A}}=\widetilde{\text{A}}-\tfrac{1}{5}(\widetilde{\text{A}}:\mathbf{1}\,\mathbf{1}+\text{e}_{\alpha}\,\widetilde{\text{A}}:\mathbf{1}\,\text{e}_{\alpha}+\mathbf{1}\,\widetilde{\text{A}}:\mathbf{1}).$$

We have written these expressions in vector notation starting from the expressions in Cartesian index notation for symmetric and trace-free tensors given in appendix A.2 of Struchtrup (2005b). For completeness, we show them here. The components of $\widetilde{\text{A}}$ are

(C 4) $$\text{A}_{(ijk)}=\tfrac{1}{6}(\text{A}_{ijk}+\text{A}_{ikj}+\text{A}_{jik}+\text{A}_{jki}+\text{A}_{kij}+\text{A}_{kji}),$$

and the components of $\overline{\text{A}}$ are

(C 5) $$\text{A}_{\langle ijk\rangle}=\text{A}_{(ijk)}-\tfrac{1}{5}(\text{A}_{(imm)}\delta_{jk}+\text{A}_{(jmm)}\delta_{ik}+\text{A}_{(kmm)}\delta_{ij}).$$
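The index formulae (C 4) and (C 5) translate directly into code. The following sketch (Python/NumPy, using einsum; our illustration, not code from the original work) builds the symmetric trace-free part of an arbitrary rank-three tensor and checks that the result is fully symmetric and trace free.

```python
# Sketch: symmetric trace-free part of a rank-three tensor via (C 4)-(C 5).
import numpy as np

def sym3(A):
    # (C 4): average over the six permutations of the indices.
    perms = ['ijk', 'ikj', 'jik', 'jki', 'kij', 'kji']
    return sum(np.einsum(p + '->ijk', A) for p in perms) / 6.0

def sym_tracefree3(A):
    # (C 5): subtract the traces of the symmetrized tensor.
    S = sym3(A)
    t = np.einsum('imm->i', S)              # contraction over two indices
    d = np.eye(3)
    return S - (np.einsum('i,jk->ijk', t, d)
                + np.einsum('j,ik->ijk', t, d)
                + np.einsum('k,ij->ijk', t, d)) / 5.0

A = np.random.default_rng(0).standard_normal((3, 3, 3))
T = sym_tracefree3(A)
assert np.allclose(T, np.einsum('ijk->ikj', T))     # symmetric in (j, k)
assert np.allclose(T, np.einsum('ijk->jik', T))     # symmetric in (i, j)
assert np.allclose(np.einsum('ijj->i', T), 0.0)     # trace free
print("symmetric trace-free projection OK")
```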
Appendix D. Thermophoretic force and drag from Grad's 13-moment method and from other approaches

Following Young's (2011) work, the non-dimensional thermophoretic force and the drag factor for the velocity problem resulting from Grad's 13-moment method (G13) are given by

(D 1) $$\Phi(Kn',\Lambda)=\frac{-12\pi[K_{tc}(1+\Lambda C_{e}Kn'+3\Lambda Kn'^{2}/\pi)+3C_{m}Kn'(1-\Lambda+\Lambda C_{e}Kn')]}{(1+3C_{m}Kn')(2+\Lambda+2\Lambda C_{e}Kn'+9\Lambda Kn'^{2}/\pi)-(K_{tc}+9C_{m}Kn')\,6\Lambda Kn'^{2}/(5\pi)},$$

(D 2) $$\Psi(Kn',\Lambda)=\frac{6\pi[(1+2C_{m}Kn')(2+\Lambda+2\Lambda C_{e}Kn')+(9-2K_{tc})\Lambda Kn'^{2}/\pi]}{(1+3C_{m}Kn')(2+\Lambda+2\Lambda C_{e}Kn'+9\Lambda Kn'^{2}/\pi)-(K_{tc}+9C_{m}Kn')\,6\Lambda Kn'^{2}/(5\pi)},$$

respectively, where $Kn'=\sqrt{\pi/2}\,Kn$. For the results presented in this paper, the thermal creep, velocity slip and temperature jump coefficients take, respectively, the Maxwell–Smoluchowski values, namely $K_{tc}=3/4$, $C_{m}=1$ and $C_{e}=15/8$ (Young 2011). It should be said that the first closing parenthesis in the numerator of (D 1), the factor $Kn'$ after the coefficient $C_{m}$ in the numerator of (D 2), and the second closing parenthesis in the denominators of both (D 1) and (D 2) are missing in Young's (2011) article (see his formulae (32a) and (32b)). When passing to the limit $\Lambda\rightarrow\infty$ in (D 2), we recover the factor obtained by Lockerby & Collyer (2016) in their formula (5.5); these authors had already noted the typographical errors in Young's paper for $\Psi$. The interpolation formula presented by Young (2011) for the thermophoretic force is

(D 3) $$\Phi(Kn',\Lambda)=\frac{-12\pi[K_{tc}(1+\Lambda C_{e}Kn')+3C_{m}Kn'(1-\Lambda+\Lambda C_{e}Kn')]}{[1+3Kn'\exp(-C_{int}/Kn')](1+3C_{m}Kn')(2+\Lambda+2\Lambda C_{e}Kn')},$$

with $K_{tc}=1.10$, $C_{m}=1.13$, $C_{e}=2.17$ and $C_{int}=0.5$. For completeness, we add the expressions used in this paper for the free-molecule regime ($Kn'\gg 1$).
For the non-dimensional thermophoretic force, Waldmann (1959) obtained

(D 4) $$\Phi=-2\pi/Kn',$$

whereas, for the drag caused by a free stream past a sphere, Epstein (1924) obtained

(D 5) $$\Psi=\frac{8\pi}{3Kn'}\left(1+\frac{\pi}{8}\right).$$

These expressions are extracted from table 1 of Young's article.
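For convenience, the expressions of this appendix can be evaluated numerically as in the sketch below (Python; our transcription of (D 1)–(D 5) with the coefficient values quoted above, not code from the cited works); this is sufficient to reproduce the G13, Young and free-molecule curves used for comparison in § 4.

```python
# Sketch: transcription of the appendix D formulae (D 1)-(D 5).
# Kn_p denotes Kn' = sqrt(pi/2) * Kn.
import numpy as np

PI = np.pi

def _den_g13(Kn_p, Lam, Ktc, Cm, Ce):
    return ((1 + 3*Cm*Kn_p) * (2 + Lam + 2*Lam*Ce*Kn_p + 9*Lam*Kn_p**2/PI)
            - (Ktc + 9*Cm*Kn_p) * 6*Lam*Kn_p**2 / (5*PI))

def phi_g13(Kn_p, Lam, Ktc=0.75, Cm=1.0, Ce=15.0/8.0):
    # (D 1), Maxwell-Smoluchowski coefficients by default.
    num = -12*PI*(Ktc*(1 + Lam*Ce*Kn_p + 3*Lam*Kn_p**2/PI)
                  + 3*Cm*Kn_p*(1 - Lam + Lam*Ce*Kn_p))
    return num / _den_g13(Kn_p, Lam, Ktc, Cm, Ce)

def psi_g13(Kn_p, Lam, Ktc=0.75, Cm=1.0, Ce=15.0/8.0):
    # (D 2)
    num = 6*PI*((1 + 2*Cm*Kn_p)*(2 + Lam + 2*Lam*Ce*Kn_p)
                + (9 - 2*Ktc)*Lam*Kn_p**2/PI)
    return num / _den_g13(Kn_p, Lam, Ktc, Cm, Ce)

def phi_young(Kn_p, Lam, Ktc=1.10, Cm=1.13, Ce=2.17, Cint=0.5):
    # (D 3), Young's interpolation formula.
    num = -12*PI*(Ktc*(1 + Lam*Ce*Kn_p) + 3*Cm*Kn_p*(1 - Lam + Lam*Ce*Kn_p))
    den = ((1 + 3*Kn_p*np.exp(-Cint/Kn_p))
           * (1 + 3*Cm*Kn_p) * (2 + Lam + 2*Lam*Ce*Kn_p))
    return num / den

def phi_waldmann(Kn_p):
    return -2*PI/Kn_p                        # (D 4)

def psi_epstein(Kn_p):
    return 8*PI/(3*Kn_p) * (1 + PI/8)        # (D 5)

# Example: G13 and Young values at Kn' = 0.1, Lambda = 4.
print(phi_g13(0.1, 4.0), psi_g13(0.1, 4.0), phi_young(0.1, 4.0))
```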
NIST 2017 National Institute of Standards and Technology, http://webbook.nist.gov/cgi/fluid.cgi?ID=C7440371&Action=Page [Online; accessed 28 January 2018].
Sharipov, F. 2004 Data on the velocity slip and temperature jump coefficients [gas mass, heat and momentum transfer]. In Thermal and Mechanical Simulation and Experiments in Microelectronics and Microsystems, 2004. EuroSimE 2004 Proceedings of the 5th International Conference on, pp. 243–249. IEEE.
Sone, Y. 2007 Molecular Gas Dynamics: Theory, Techniques, and Applications. Springer Science and Business Media.
Sone, Y. & Aoki, K. 1983 A similarity solution of the linearized Boltzmann equation with application to thermophoresis of a spherical particle. J. Méc. Théor. Appl. 2, 3–12.
Struchtrup, H. 2005a Derivation of 13 moment equations for rarefied gas flow to second order accuracy for arbitrary interaction potentials. Multiscale Model. Simul. 3 (1), 221–243.
Struchtrup, H. 2005b Macroscopic Transport Equations for Rarefied Gas Flows: Approximation Methods in Kinetic Theory. Springer.
Struchtrup, H., Beckmann, A., Rana, A. S. & Frezzotti, A. 2017 Evaporation boundary conditions for the R13 equations of rarefied gas dynamics. Phys. Fluids 29 (9), 092004.
Struchtrup, H. & Frezzotti, A. 2016 Evaporation/condensation boundary conditions for the regularized 13 moment equations. In AIP Conference Proceedings, vol. 1786, p. 140002. AIP Publishing.
Struchtrup, H. & Taheri, P. 2011 Macroscopic transport models for rarefied gas flows: a brief review. IMA J. Appl. Maths 76 (5), 672–697.
Struchtrup, H. & Torrilhon, M. 2003 Regularization of Grad's 13 moment equations: derivation and linear analysis. Phys. Fluids 15 (9), 2668–2680.
Struchtrup, H. & Torrilhon, M. 2008 Higher-order effects in rarefied channel flows. Phys. Rev. E 78 (4), 046301.
Takata, S., Sone, Y. & Aoki, K. 1993 Thermophoresis of a spherical aerosol particle: numerical analysis based on kinetic theory of gases. J. Aero. Sci. 24, S147–S148.
Talbot, L., Cheng, R., Schefer, R. & Willis, D. 1980 Thermophoresis of particles in a heated boundary layer. J. Fluid Mech. 101 (4), 737–758.
Torrilhon, M. 2010 Slow gas microflow past a sphere: analytical solution based on moment equations. Phys. Fluids 22 (7), 072001.
Torrilhon, M. 2016 Modeling nonequilibrium gas flow based on moment equations. Annu. Rev. Fluid Mech. 48, 429–458.
Torrilhon, M. & Struchtrup, H. 2008 Boundary conditions for regularized 13-moment-equations for micro-channel-flows. J. Comput. Phys. 227 (3), 1982–2011.
Tyndall, J. 1870 On dust and disease. Proc. R. Inst. 6, 1–14.
Waldmann, L. 1959 Über die kraft eines inhomogenen gases auf kleine suspendierte kugeln. Z. Naturforsch. A 14 (7), 589–599.
Yamamoto, K. & Ishihara, Y. 1988 Thermophoresis of a spherical particle in a rarefied gas of a transition regime. Phys. Fluids 31 (12), 3618–3624.
Young, J. B. 2011 Thermophoresis of a spherical particle: reassessment, clarification, and new analysis. Aerosol Sci. Technol. 45 (8), 927–948.
Zaitsev, V. F. & Polyanin, A. D. 2002 Handbook of Exact Solutions for Ordinary Differential Equations. CRC Press.
Zheng, F. 2002 Thermophoresis of spherical and non-spherical particles: a review of theories and experiments. Adv. Colloid Interface Sci. 97 (1), 255–278.
Author or Editor: R. Weinreich

Quality assurance of iodine-124 produced via the nuclear reaction 124Te(d,2n)124I
Journal of Radioanalytical and Nuclear Chemistry, Volume 213: Issue 4
https://doi.org/10.1007/bf02163571
Authors: R. Weinreich and E. Knust
For more than a year, 124I (T = 4.15 d) has been produced routinely with a compact cyclotron by irradiation of 124TeO2 with 14 MeV deuterons, followed by dry distillation of the iodine radioisotopes formed from irradiated target materials. The following by-products have been measured and compiled in each charge: 13.2-h 123I, 60-d 125I, 13.0-d 126I, 12.4-h 130I and 8.02-d 131I. The data show that after 45 h decay time, the sum of the activities of these nuclides is less than 5% of the 124I activity. Observation of this limit has been required by the Swiss Regulatory Agencies for a PET study of cell proliferation in human brain tumors using [124I]IUdR.

Simple preparation of 76Br, 123I and 211At labeled 5-halo-2′-deoxyuridine
Authors: J. Koziorowski and R. Weinreich
A fast and easy method for the preparation of radiolabeled 5-halo-2′-deoxyuridine (halo = [76Br], [123I] and [211At]) is presented. Labeling is accomplished by oxidation of the halogenide with Iodogen for [123I] and [211At], and Chloramine-T (CAT) for [76Br], followed by halodestannylation of 5-trimethylstannyl-2′-deoxyuridine (TMSUdR). The reaction takes 1 minute, giving >90% yield for all three halogens.

Production of 18F with an 18O enriched water target
Authors: I. Huszár and R. Weinreich
A target system for the production of nucleophilic 110-min 18F from isotopically enriched [18O]H2O is described.
The process occurs via the nuclear reaction 18O(p,n)18F; the available proton beam of 72 MeV must be degraded to the entrance energy of 15 MeV. This process is in regular use for the batch production of 0.8–1 Ci of 18F.

Production of 123I and 28Mg by high-energy nuclear reactions for applications in life sciences
Authors: R. Weinreich, S. Qaim, H. Michael, and G. Stöcklin
The advantages of high energy cyclotrons as compared to small compact cyclotrons for the production of special radionuclides are outlined. The routine production of 123I (T = 13.3 h) and 28Mg (T = 21.1 h) by means of high energy nuclear reactions at the Jülich Isochronous Cyclotron is described. The reaction $^{127}\text{I}(d,6n)^{123}\text{Xe}\xrightarrow{\beta^{+},\,\text{EC}}{}^{123}\text{I}$ at 78 to 64 MeV is used for the production of 123I with thick target yields of 8 mCi/μAh and high radionuclidic purity. The practical experience in the application of this process, which is well suited for the production of Na123I and for 123Xe-exposure labelling techniques, is reported. 28Mg is produced by the 27Al(α,3p)28Mg reaction at Eα = 140 to 30 MeV with thick target yields of 40 μCi/μAh. The carrier-free 28Mg is separated from the matrix activities by coprecipitation and anion exchange with chemical yields of 80%.

Determination of uranium and plutonium in shielding concrete
https://doi.org/10.1023/b:jrnc.0000034866.40700.45
Authors: R. Weinreich, S. Bajo, J. Eikenberg, and F. Atchison
The formation of plutonium radionuclides (239+240Pu) from uranium was determined in dismounted shielding concrete from accelerator components. Plutonium and uranium fractions were separated by radioanalytical techniques and measured by α-spectroscopy. The measurements are consistent with yield calculations based on transport and single-particle codes. The yield of 239+240Pu did not exceed the two-fold exemption limit given in the Swiss Radiation Protection Law; thus the plutonium content in shielding concrete should not cause problems for the environment.
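The 45 h decay-time criterion quoted in the 124I quality-assurance abstract above is a straightforward consequence of the listed half-lives; the short sketch below shows the arithmetic. The initial activity fractions used here are purely illustrative placeholders, not measured values from the paper.

```python
# Half-lives taken from the abstract above, converted to hours.
HALF_LIVES_H = {"I-124": 4.15 * 24, "I-123": 13.2, "I-125": 60 * 24,
                "I-126": 13.0 * 24, "I-130": 12.4, "I-131": 8.02 * 24}

def activity(a0, half_life_h, t_h):
    """Exponential decay: A(t) = A0 * 2**(-t / T_half)."""
    return a0 * 2.0 ** (-t_h / half_life_h)

def byproduct_fraction(initial, t_h):
    """Sum of by-product activities relative to the 124I activity at time t."""
    main = activity(initial["I-124"], HALF_LIVES_H["I-124"], t_h)
    others = sum(activity(a0, HALF_LIVES_H[name], t_h)
                 for name, a0 in initial.items() if name != "I-124")
    return others / main

if __name__ == "__main__":
    # Hypothetical end-of-bombardment activities (arbitrary units, NOT from the paper).
    initial = {"I-124": 100.0, "I-123": 15.0, "I-125": 1.0,
               "I-126": 1.0, "I-130": 5.0, "I-131": 0.5}
    for t in (0, 24, 45, 72):
        print(f"t = {t:3d} h   by-product / 124I activity = {byproduct_fraction(initial, t):.3f}")
```

Because the short-lived by-products (123I, 130I) decay much faster than 124I, the relative contamination drops steeply during the first two days, which is the effect the quoted 45 h criterion exploits.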
Does the sound barrier apply to a silent aircraft? [duplicate]
Would a perfectly silent supersonic aircraft create a sonic boom? (1 answer)
For my summer duty, I was looking at the sound barrier effects with a glimmer of hope to understand them:
"A supersonic aircraft is one that can travel faster than the speed of sound. As the plane approaches the speed of sound, it catches up with the sound waves traveling in front of it and pushes them up against each other. The result is a barrier of squashed air in front of the aircraft." Source
Is it theoretically possible to design an aircraft that is totally silent, or at least quiet? In that case, would the problems associated with transonic/supersonic speeds disappear, or at least be reduced?
As the plane approaches the speed of sound, it catches up with the sound waves traveling in front of it and pushes them up against each other.
Or is there a silence barrier too?
supersonic transonic
mins
$\begingroup$ An airplane makes sound just by running into air. Much of the noise you associate with a jet plane is not the engine noise, but the noise of the air being disturbed. A glider at high speed makes a sound not altogether unlike that of a jet. $\endgroup$ – Wayne Conrad Jul 17 '17 at 21:11
$\begingroup$ Related: Would a perfectly silent supersonic aircraft create a sonic boom? $\endgroup$ – Ralph J Jul 17 '17 at 22:03
$\begingroup$ An airplane which disturbs no air at all when it moves will be totally silent. $\endgroup$ – kevin Jul 18 '17 at 11:18
$\begingroup$ That graphic seems to reinforce the myth that the sonic boom only occurs at Mach 1.0. It would be nice if you included a graphic showing a 'boom carpet' instead. $\endgroup$ – Sanchises Jul 18 '17 at 16:39
The sound barrier has nothing to do with the noise the aircraft makes. It has to do with the fact that if it is going fast enough, the air can't get out of the way. This creates a shock wave where the aircraft is pushing the air aside. A shock wave is where the speed of the air transitions from faster than sound to slower than sound, and it involves the release of considerable energy. You can control the shock wave to some extent by careful shaping of the surfaces, but a plane going fast enough can't be silent. You can make very quiet subsonic aircraft, but they tend to go slowly so that the noise of the air flowing over the craft is minimized.
zeta-band
$\begingroup$ I wanted to add here that the shuttle produced sonic booms despite having no power during reentry $\endgroup$ – Eugene Styer Jul 17 '17 at 22:48
I'll try to keep this really simple, as OP's question leads me to believe they are not familiar with the physics involved. As indicated by others here, the sound barrier is not directly related to noise, but rather to the nature of sound itself: Sound is just a pressure wave moving through the air. In turn, the speed of sound is the speed at which a disturbance (pressure wave) can propagate through air. Us hearing these disturbances (or not, there is a lot of sound that escapes us) is just an evolutionary adaptation. For air to "move aside" to let an object like a plane fly through, it must sense it approaching. Aircraft disturb air a good distance in front of them; this may sound a bit odd the first time you hear it. If the plane is faster than the speed of sound, the air cannot "move aside" in the usual manner, as the aircraft outruns the pressure wave that would have moved the air out of the way in time.
This is why fast aircraft measure their speed using the Mach number, which expresses how fast they are in relation to sound, as this becomes the most important factor once you approach $M_{\infty}=1.0$. The end effect is that a supersonic aircraft must then force the air out of its way in a more forceful manner, compressing and accelerating it to its own speed. This phenomenon is commonly called the sound barrier. As an added point of interest, a similar phenomenon happens at the rear of an aircraft flying over the speed of sound: the airflow cannot adapt to the end of the body fast enough, creating a low-pressure area.
AEhere supports Monica
$\begingroup$ Air doesn't really "sense" anything. What is happening is a pressure wave is created in front of the aircraft as it pushes the air out of the way. This is called a "bow shock" and is the same principle as a boat pushing water out of the way ahead of the boat. $\endgroup$ – Ron Beyer Jul 18 '17 at 15:47
$\begingroup$ @Ron Anthropomorphic explanations are usually perfectly acceptable. Air does not sense, but it is disturbed by pressure propagating at the speed of sound - much like you do not sense sound but your eardrums are displaced by the pressure. $\endgroup$ – Sanchises Jul 18 '17 at 16:43
$\begingroup$ As both aircraft displacement and aircraft noise create a pressure wave which propagates at the "speed of sound", I imagine both participate in the creation of the shock wave. What is missing in your answer is in which proportion, and how much the shock wave energy would be reduced if the aircraft sound was reduced. This may not be significant, but that is what the question is about. $\endgroup$ – mins Jul 18 '17 at 18:17
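To put rough numbers on the "outrunning the pressure wave" reasoning in the answers above, here is a small sketch. It uses the standard relations $a=\sqrt{\gamma R T}$ for the speed of sound in an ideal gas and $\sin\mu = 1/M$ for the Mach cone half-angle (valid only for $M>1$); the flight conditions are arbitrary illustrations, not taken from the thread.

```python
import math

GAMMA = 1.4      # ratio of specific heats for air
R_AIR = 287.05   # specific gas constant for air, J/(kg*K)

def speed_of_sound(temp_k):
    """a = sqrt(gamma * R * T) for an ideal gas."""
    return math.sqrt(GAMMA * R_AIR * temp_k)

def mach_number(tas_ms, temp_k):
    return tas_ms / speed_of_sound(temp_k)

def mach_angle_deg(mach):
    """Half-angle of the Mach cone; only defined for supersonic flight."""
    if mach <= 1.0:
        return None
    return math.degrees(math.asin(1.0 / mach))

if __name__ == "__main__":
    temp = 216.65  # K, roughly standard-atmosphere temperature at cruise altitude
    for tas in (250.0, 295.0, 340.0, 590.0):   # true airspeeds in m/s (illustrative)
        m = mach_number(tas, temp)
        angle = mach_angle_deg(m)
        if angle is None:
            print(f"TAS {tas:5.0f} m/s -> M = {m:4.2f} (pressure waves still outrun the aircraft)")
        else:
            print(f"TAS {tas:5.0f} m/s -> M = {m:4.2f}, Mach cone half-angle = {angle:.1f} deg")
```

Below Mach 1 the disturbance travels ahead of the aircraft and the air can start moving aside early; above Mach 1 the disturbances pile up on the Mach cone, which is the shock wave discussed in the answers, independent of how much audible noise the aircraft itself emits.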
A "Tits-alternative" for subgroups of surface mapping class groups by John McCarthy PDF It has been observed that surface mapping class groups share various properties in common with the class of linear groups (e.g., $[\mathbf {BLM}, \mathbf {H}]$). In this paper, the known list of such properties is extended to the "Tits-Alternative", a property of linear groups established by J. Tits $[\mathbf {T}]$. In fact, we establish that every subgroup of a surface mapping class group is either virtually abelian or contains a nonabelian free group. In addition, in order to establish this result, we develop a theory of attractors and repellers for the action of surface mapping classes on Thurston's projective lamination spaces $[\mathbf {Th1}]$. This theory generalizes results known for pseudo-Anosov mapping classes $[\mathbf {FLP}]$. Hyman Bass and Alexander Lubotzky, Automorphisms of groups and of schemes of finite type, Israel J. Math. 44 (1983), no. 1, 1–22. MR 693651, DOI 10.1007/BF02763168 Joan S. Birman, Braids, links, and mapping class groups, Annals of Mathematics Studies, No. 82, Princeton University Press, Princeton, N.J.; University of Tokyo Press, Tokyo, 1974. MR 0375281 Joan S. Birman, The algebraic structure of surface mapping class groups, Discrete groups and automorphic functions (Proc. Conf., Cambridge, 1975) Academic Press, London, 1977, pp. 163–198. MR 0488019 Joan S. Birman, Alex Lubotzky, and John McCarthy, Abelian and solvable subgroups of the mapping class groups, Duke Math. J. 50 (1983), no. 4, 1107–1120. MR 726319, DOI 10.1215/S0012-7094-83-05046-9 Travaux de Thurston sur les surfaces, Astérisque, vol. 66, Société Mathématique de France, Paris, 1979 (French). Séminaire Orsay; With an English summary. MR 568308 Jane Gilman, On the Nielsen type and the classification for the mapping class group, Adv. in Math. 40 (1981), no. 1, 68–96. MR 616161, DOI 10.1016/0001-8708(81)90033-5 John L. Harer, Stability of the homology of the mapping class groups of orientable surfaces, Ann. of Math. (2) 121 (1985), no. 2, 215–249. MR 786348, DOI 10.2307/1971172 R. C. Penner and J. L. Harer, Combinatorics of train tracks, Annals of Mathematics Studies, vol. 125, Princeton University Press, Princeton, NJ, 1992. MR 1144770, DOI 10.1515/9781400882458 W. J. Harvey, Geometric structure of surface mapping class groups, Homological group theory (Proc. Sympos., Durham, 1977) London Math. Soc. Lecture Note Ser., vol. 36, Cambridge Univ. Press, Cambridge-New York, 1979, pp. 255–269. MR 564431 A. Hatcher and W. Thurston, A presentation for the mapping class group of a closed orientable surface, Topology 19 (1980), no. 3, 221–237. MR 579573, DOI 10.1016/0040-9383(80)90009-9 Steven P. Kerckhoff, The Nielsen realization problem, Ann. of Math. (2) 117 (1983), no. 2, 235–265. MR 690845, DOI 10.2307/2007076 J. McCarthy, Normalizers and centralizers of pseudo-Anosov mapping classes, preprint available upon request. —, Subgroups of surface mapping class groups, Ph.D. thesis, Columbia University, May, 1983. J. Morgan, Train tracks and geodesic laminations, Columbia University Lecture Notes (to appear). Jakob Nielsen, Surface transformation classes of algebraically finite type, Danske Vid. Selsk. Mat.-Fys. Medd. 21 (1944), no. 2, 89. MR 15791 R. Penner, A computation of the action of the mapping class group on isotopy classes of curves and arcs in surfaces, Ph.D. thesis, Massachusetts Institute of Technology, 1982. W. P. Thurston, The geometry and topology of $3$-manifolds, Princeton Univ. 
—, On the geometry and dynamics of diffeomorphisms of surfaces, preprint.
William P. Thurston, Three-dimensional manifolds, Kleinian groups and hyperbolic geometry, Bull. Amer. Math. Soc. (N.S.) 6 (1982), no. 3, 357–381. MR 648524, DOI 10.1090/S0273-0979-1982-15003-0
—, Lecture Notes, Boulder, Colorado, 1980.
—, Hyperbolic structures on $3$-manifolds. II, preprint, July, 1980.
J. Tits, Free subgroups in linear groups, J. Algebra 20 (1972), 250–270. MR 286898, DOI 10.1016/0021-8693(72)90058-0
MSC: Primary 57M99; Secondary 20F38, 57N05
August 2016, 36(8): 4495-4516. doi: 10.3934/dcds.2016.36.4495
Global well-posedness of strong solutions to a tropical climate model
Jinkai Li 1 and Edriss Titi 2
1. Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 76100, Israel
2. Department of Mathematics, Texas A&M University, 3368-TAMU, College Station, TX 77843-3368, United States
Received April 2015; Revised October 2015; Published March 2016
In this paper, we consider the Cauchy problem to the tropical climate model derived by Frierson--Majda--Pauluis in [15], which is a coupled system of the barotropic and baroclinic modes of the velocity and the typical midtropospheric temperature. The system considered in this paper has viscosities in the momentum equations, but no diffusivity in the temperature equation. We establish here the global well-posedness of strong solutions to this model. In proving the global existence of strong solutions, to overcome the difficulty caused by the absence of the diffusivity in the temperature equation, we introduce a new velocity $w$ (called the pseudo baroclinic velocity), which has more regularities than the original baroclinic mode of the velocity. An auxiliary function $\phi$, which looks like the effective viscous flux for the compressible Navier-Stokes equations, is also introduced to obtain the $L^\infty$ bound of the temperature. Regarding the uniqueness, we use the idea of performing suitable energy estimates at level one order lower than the natural basic energy estimates for the system.
Keywords: strong solutions, primitive equations, global well-posedness, tropical atmospheric dynamics, logarithmic Gronwall inequality.
Mathematics Subject Classification: Primary: 35D35, 76D03; Secondary: 86A10.
Citation: Jinkai Li, Edriss Titi. Global well-posedness of strong solutions to a tropical climate model. Discrete & Continuous Dynamical Systems - A, 2016, 36 (8): 4495-4516. doi: 10.3934/dcds.2016.36.4495
H. Brézis and T. Gallouet, Nonlinear Schrödinger evolution equations, Nonlinear Anal., 4 (1980), 677. doi: 10.1016/0362-546X(80)90068-1.
H. Brézis and S. Wainger, A note on limiting cases of Sobolev embeddings and convolution inequalities, Comm. Partial Differential Equations, 5 (1980), 773. doi: 10.1080/03605308008820154.
C. Cao, S. Ibrahim, K. Nakanishi and E. S. Titi, Finite-time blowup for the inviscid primitive equations of oceanic and atmospheric dynamics, Comm. Math. Phys., 337 (2015), 473. doi: 10.1007/s00220-015-2365-1.
C. Cao, J. Li and E. S. Titi, Local and global well-posedness of strong solutions to the 3D primitive equations with vertical eddy diffusivity, Arch. Rational Mech. Anal., 214 (2014), 35. doi: 10.1007/s00205-014-0752-y.
C. Cao, J. Li and E. S. Titi, Global well-posedness of strong solutions to the 3D primitive equations with horizontal eddy diffusivity, J. Differential Equations, 257 (2014), 4108. doi: 10.1016/j.jde.2014.08.003.
C. Cao, J. Li and E. S. Titi, Global well-posedness of the 3D primitive equations with only horizontal viscosity and diffusivity, Comm. Pure Appl. Math. doi: 10.1002/cpa.21576.
C. Cao, J. Li and E. S. Titi, Strong solutions to the 3D primitive equations with horizontal dissipation: near $H^1$ initial data, preprint.
C. Cao, J. Li and E. S. Titi, Global well-posedness of the 3D primitive equations with horizontal viscosities and vertical diffusion, preprint.
C. Cao and E. S. Titi, Global well-posedness of the three-dimensional viscous primitive equations of large scale ocean and atmosphere dynamics, Ann. of Math., 166 (2007), 245. doi: 10.4007/annals.2007.166.245.
C. Cao and E. S. Titi, Global well-posedness of the 3D primitive equations with partial vertical turbulence mixing heat diffusion, Comm. Math. Phys., 310 (2012), 537. doi: 10.1007/s00220-011-1409-4.
R. R. Coifman, R. Rochberg and G. Weiss, Factorization theorems for Hardy spaces in several variables, Ann. of Math., 103 (1976), 611. doi: 10.2307/1970954.
R. R. Coifman and Y. Meyer, On commutators of singular integrals and bilinear singular integrals, Trans. Amer. Math. Soc., 212 (1975), 315. doi: 10.1090/S0002-9947-1975-0380244-8.
L. C. Evans, Partial Differential Equations, $2^{nd}$ edition, 2010. doi: 10.1090/gsm/019.
E. Feireisl and A. Novotný, Singular Limits in Thermodynamics of Viscous Fluids, Advances in Mathematical Fluid Mechanics, 2009. doi: 10.1007/978-3-7643-8843-0.
D. M. W. Frierson, A. J. Majda and O. M. Pauluis, Large scale dynamics of precipitation fronts in the tropical atmosphere: a novel relaxation limit, Commun. Math. Sci., 2 (2004), 591. doi: 10.4310/CMS.2004.v2.n4.a3.
A. E. Gill, Some simple solutions for heat-induced tropical circulation, Quart. J. Roy. Meteor. Soc., 106 (1980), 447. doi: 10.1002/qj.49710644905.
G. M. Kobelkov, Existence of a solution in the large for the 3D large-scale ocean dynamics equations, C. R. Math. Acad. Sci. Paris, 343 (2006), 283. doi: 10.1016/j.crma.2006.04.020.
I. Kukavica and M. Ziane, The regularity of solutions of the primitive equations of the ocean in space dimension three, C. R. Math. Acad. Sci. Paris, 345 (2007), 257. doi: 10.1016/j.crma.2007.07.025.
I. Kukavica and M. Ziane, On the regularity of the primitive equations of the ocean, Nonlinearity, 20 (2007), 2739. doi: 10.1088/0951-7715/20/12/001.
A. Larios, E. Lunasin and E. S. Titi, Global well-posedness for the 2D Boussinesq system with anisotropic viscosity and without heat diffusion, J. Differential Equations, 255 (2013), 2636. doi: 10.1016/j.jde.2013.07.011.
J. Li and E. S. Titi, Global well-posedness of the 2D Boussinesq equations with vertical dissipation, Arch. Ration. Mech. Anal., 220 (2016), 983. doi: 10.1007/s00205-015-0946-y.
J. Li, E. S. Titi and Z. Xin, On the uniqueness of weak solutions to the Ericksen-Leslie liquid crystal model in $\mathbb R^2$, Math. Models Methods Appl. Sci., 26 (2016), 803. doi: 10.1142/S0218202516500184.
J. L. Lions, R. Temam and S. Wang, New formulations of the primitive equations of atmosphere and applications, Nonlinearity, 5 (1992), 237. doi: 10.1088/0951-7715/5/2/001.
J. L. Lions, R. Temam and S. Wang, On the equations of the large-scale ocean, Nonlinearity, 5 (1992), 1007. doi: 10.1088/0951-7715/5/5/002.
J. L. Lions, R. Temam and S. Wang, Mathematical theory for the coupled atmosphere-ocean models (CAO III), J. Math. Pures Appl., 74 (1995), 105.
A. J. Majda and J. A. Biello, The nonlinear interaction of barotropic and equatorial baroclinic Rossby waves, J. Atmos. Sci., 60 (2003), 1809. doi: 10.1175/1520-0469(2003)060<1809:TNIOBA>2.0.CO;2.
T. Matsuno, Quasi-geostrophic motions in the equatorial area, J. Meteor. Soc. Japan, 44 (1966), 25.
T. K. Wong, Blowup of solutions of the hydrostatic Euler equations, Proc. Amer. Math. Soc., 143 (2015), 1119. doi: 10.1090/S0002-9939-2014-12243-X.
Can there be a context free language that is not recognizable by a PEG?
This is related to this question. Essentially, I want to know whether my reasoning is correct.
We know that parsing with a context free grammar is the same as boolean matrix multiplication (forward: Valiant 1975, backward: Lee et al. 2002), and the latter has a lower bound of $\Omega(n^2)$ for arbitrary matrices. If so, then there should exist a context free language $L$ such that any context free grammar that represents it would take $\Omega(n^2)$ time for matching a string. This is because, if there existed a grammar $G_n$ that allowed matches faster than $O(n^2)$ for any given CFL, then that grammar would allow faster multiplication for the corresponding Boolean matrices. Since Valiant and Lee et al. together show that multiplication of BMs is the same as parsing with a CFG, any BM could then be multiplied faster than $O(n^2)$, which is below the theoretical lower bound. So there should exist a context free language $L$ such that it takes at least $\Omega(n^2)$ time to check membership in $L$.
PEGs are known to require only linear time (Birman and Ullman 1970), (Loff et al. 2019). If there exists a PEG for $L$, it would be a recognizer that checks membership in $L$ in linear time, and hence could solve matrix multiplication in linear time. Hence, there does not exist a PEG for $L$.
Valiant 1975
context-free recognition, for $n$ character input strings, can be carried out at least as fast as multiplication for $n \times n$ Boolean matrices
Lee et al. 2002
Any CFG parser with time complexity $O(gn^{3-\epsilon})$, where $g$ is the size of the grammar and $n$ is the length of the input string, can be efficiently converted into an algorithm to multiply $m\times m$ Boolean matrices in time $O(m^{3-\epsilon/3})$.
Loff et al. 2019
In fact, the only method we know to prove that a language has no PEG is by using the time-hierarchy theorem of complexity theory: using diagonalisation one may construct some language $L_2$ which is decidable, say, in time $n^2$ (by a random-access machine), but not in linear time, and because PEGs can be recognised in linear time using the tabular parsing algorithm of Birman and Ullman [2] (or packrat parsing [32,33]), there will be no parsing expression grammar for $L_2$.
formal-languages context-free parsers
rahul
$\begingroup$ @rahul: evidently, there are CFGs which can be parsed in linear time, despite Lee et al. Certainly, super-linear grammars exist. But you want to prove that there are languages which have no linear grammar. I haven't read Lee et al, but the abstract only talks about grammars. The quote from Loff et al. is convincing, but it's not based on converting between parsers and matrix multipliers. $\endgroup$ – rici Nov 30 '19 at 20:07
$\begingroup$ So we know (via diagonalization) that there are CFLs without PEGs, but that doesn't identify any particular CFL, nor does it help us characterise a CFG whose language might be necessarily PEG-less. We do know that a super-linear language is not deterministic. $\endgroup$ – rici Nov 30 '19 at 20:09
$\begingroup$ @rici If we can say that there are CFLs without PEGs, that is sufficient. This was the open question (as you answered here -- second part). $\endgroup$ – rahul Dec 1 '19 at 13:19
I see two flaws in this proof sketch, one related to CFLs vs CFGs, and another related to nested quantifiers and running time as a function of multiple parameters.
Any time you have a high-level proof strategy that seems to lead to surprising results, it is a good idea to check it carefully by expanding each step to obtain a detailed proof. Expand each claim with a precise statement, by applying the definition or the exact theorem in the literature, and verify carefully that they match up. This is particularly important when dealing with lower bounds, as they tend to introduce nested quantifiers that can lead your intuition astray when thinking only at a high level. Flaw #1: CFLs vs CFGs The proof seems to conflate context-free languages (CFLs) with context-free grammars (CFGs). However, there can be multiple CFGs that all generate the same CFL. At best, your proof strategy shows that there exists a CFG $G$ that can't be parsed by a PEG parser. But that's not surprising; we already know that PEG parsers can only parse CFGs that are in the PEG format. We cannot conclude anything about the corresponding CFL $L(G)$; for all we know, there might exist some other grammar $G'$ that is a PEG grammar and that yields the same language, i.e., $L(G)=L(G')$. Your proof doesn't rule that out, so it does not prove that the CFL it constructs can't be parsed by a PEG parser. A concrete example of this is given at https://en.wikipedia.org/wiki/Parsing_expression_grammar#Expressive_power, which shows a simple CFG that cannot be parsed by a PEG parser, but where there exists another CFG for the same language that can be parsed by a PEG parser. Flaw #2: Multiple parameters It's important to expand out the statement of what is meant by these lower bounds. The lower bound on matrix multiplication means that, for each matrix multiplication algorithm, there exists an infinite family $(A_1,B_1),(A_2,B_2),\cdots$ of matrices such that $A_n,B_n$ are $n\times n$ matrices, and multiplying $A_n \times B_n$ using this algorithm takes $\Omega(n^2)$ time. Lee's reduction describes how to construct a matrix multiplication algorithm from any context-free parser. If we now apply Lee's reduction to the matrix multiplication algorithm obtained from a PEG parser, we obtain an infinite family $(G_1,w_1),(G_2,w_2),\cdots$ of CFGs and inputs such that parsing them takes a long time. You'll need to dive into the details of Lee's reduction to determine the sizes of the $G_n,w_n$. Based on a quick look, it looks to me like the size of $G_n$ is $\Theta(n^2)$ and the size of $w_n$ is $\Theta(n^{1/3})$, but I'm not sure whether that's correct; you'd need to figure that out. Next, you'd need to figure out the running time of a PEG parser, as a function of both the size $g$ of the grammar and the size $n$ of the input string. Standard references state the running time of a packrat parser for a PEG grammar as $O(n)$, but they don't describe the dependence on $g$; is it $O(gn)$? $O(g^2n)$? something else? You'd need to figure that out, and then apply it to the family above, to determine what the asymptotic running time of this parser is on the family $L_n,w_n$, and thus what the running time of this matrix multiplication algorithm is on the family $A_n,B_n$, to determine whether it contradicts the $\Omega(n^2)$ lower bound. For instance, if the running time of a PEG parser is $O(gn)$, then Lee's reduction yields a matrix multiplication algorithm that takes $O(n^{2.333\ldots})$ time on the family $A_n,B_n$, which does not contradict the known lower bound. 
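To make the exponent bookkeeping in the last paragraph concrete, here is a small sketch of the arithmetic. The $\Theta(n^2)$ grammar size and $\Theta(n^{1/3})$ word size are the estimates quoted above (with $n$ the matrix dimension in Lee's reduction), and the parser cost model $O(g\cdot|w|)$ is the hypothetical one discussed there, not an established bound.

```python
from fractions import Fraction

# Estimates quoted above (n = matrix dimension in Lee's reduction):
GRAMMAR_SIZE_EXP = Fraction(2)      # |G_n| = Theta(n^2)
WORD_SIZE_EXP = Fraction(1, 3)      # |w_n| = Theta(n^(1/3))

def matmul_exponent(parser_g_exp, parser_w_exp):
    """Exponent of n in the resulting matrix-multiplication running time,
    assuming the parser runs in O(g**parser_g_exp * |w|**parser_w_exp)."""
    return parser_g_exp * GRAMMAR_SIZE_EXP + parser_w_exp * WORD_SIZE_EXP

if __name__ == "__main__":
    # Hypothetical PEG/packrat cost model O(g * |w|):
    e = matmul_exponent(1, 1)
    print(e, float(e))                     # 7/3 ~= 2.333 -> no contradiction with Omega(n^2)
    # A hypothetical cost model O(g^2 * |w|) would give an even larger exponent:
    print(float(matmul_exponent(2, 1)))    # ~4.333
```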
Notice how Lee's result does not provide a single context-free grammar or context-free language where parsing is slow; it provides an infinite family of pairs of grammars and inputs (which was not considered in your proof strategy). Also note the importance of getting the nested quantifiers right, and of capturing how the running time of a parser depends on both the size of the input and on the size of the grammar (which was not considered in your proof strategy). Hopefully this highlights how a strategy that sounds good can run into difficulties when one tries to apply it in detail; and one must check those details before assuming the strategy will work out.
D.W.♦
$\begingroup$ "At best, your proof strategy shows that there exists a CFG G that can't be parsed by a PEG parser." -- I do not see how this follows from my reasoning? I do not say anything about a particular grammar. $\endgroup$ – rahul Dec 1 '19 at 13:33
$\begingroup$ For the Flaw 2, the point is that it does not describe the impact of the grammar right? I think that is a valid criticism. I will think it over, and understand the paper better, and get back. $\endgroup$ – rahul Dec 1 '19 at 13:41
$\begingroup$ @rahul, You don't say anything about a grammar -- your proof sketch talks about languages -- but that is because your proof sketch conflates the two. In other words, that is exactly the flaw. If you checked carefully what step 1 actually guarantees, it is at best something about grammars, not something about languages. In other words, step 2 is flawed where it claims "So there should exist a context free language"; the correct statement would be "So there should exist a context free grammar" (well, really an infinite family of grammars, but that's beside the point). $\endgroup$ – D.W.♦ Dec 1 '19 at 17:30
$\begingroup$ I'm not entirely sure what you mean by "impact of the grammar", so I don't know if that is the point I was trying to make with Flaw 2. Hopefully you'll be able to read what I wrote and figure out if it matches what you are thinking. $\endgroup$ – D.W.♦ Dec 1 '19 at 17:31
$\begingroup$ I have added an explanation on step 1. which connects grammars in step 1 to languages in step 2. Does my explanation make sense? $\endgroup$ – rahul Dec 2 '19 at 5:08
SCC-DFTB Parameters for Simulating Hybrid Gold-Thiolates Compounds
Journal of Computational Chemistry 36(27)
DOI:10.1002/jcc.24046
Arnaud Fihey (Paris Diderot University), Christian Hettich, Jérémy Touzeau, François Maurel, and co-authors (8 authors in total)
We present a parametrization of a self-consistent charge density functional-based tight-binding scheme (SCC-DFTB) to describe gold-organic hybrid systems by adding new Au-X (X = Au, H, C, S, N, O) parameters to a previous set designed for organic molecules. With the aim of describing gold-thiolates systems within the DFTB framework, the resulting parameters are successively compared with density functional theory (DFT) data for the description of Au bulk, Aun gold clusters (n = 2, 4, 8, 20), and AunSCH3 (n = 3 and 25) molecular-sized models. The geometrical, energetic, and electronic parameters obtained at the SCC-DFTB level for the small Au3SCH3 gold-thiolate compound compare very well with DFT results, and prove that the different binding situations of the sulfur atom on gold are correctly described with the current parameters. For a larger gold-thiolate model, Au25SCH3, the electronic density of states and the potential energy surfaces resulting from the chemisorption of the molecule on the gold aggregate obtained with the new SCC-DFTB parameters are also in good agreement with DFT results. © 2015 Wiley Periodicals, Inc.
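As an illustration of the kind of benchmark described in the abstract (a chemisorption potential-energy curve and the associated binding energy), the sketch below assembles such a curve from single-point total energies. All energies and Au–S distances are placeholder numbers, not data from the paper; it is assumed that the total energies of the complex and of the isolated fragments have already been computed with whatever DFTB or DFT code is being benchmarked.

```python
# Minimal sketch: binding-energy curve for a thiolate approaching a gold cluster.
# E_bind(d) = E(cluster + SCH3 at distance d) - E(cluster) - E(SCH3)
# All numbers below are placeholders, not values from the paper.

E_CLUSTER = -100.000   # total energy of the bare Au_n cluster (eV, hypothetical)
E_THIOLATE = -25.000   # total energy of the isolated SCH3 radical (eV, hypothetical)

# (Au-S distance in Angstrom, total energy of the complex in eV) -- placeholders
SCAN = [(2.0, -127.10), (2.3, -127.45), (2.6, -127.30), (3.0, -126.80), (4.0, -125.60)]

def binding_curve(scan, e_cluster, e_fragment):
    """Return (distance, binding energy) pairs; negative values mean bound."""
    return [(d, e_tot - e_cluster - e_fragment) for d, e_tot in scan]

if __name__ == "__main__":
    curve = binding_curve(SCAN, E_CLUSTER, E_THIOLATE)
    d_min, e_min = min(curve, key=lambda point: point[1])
    for d, e in curve:
        print(f"d(Au-S) = {d:.2f} A   E_bind = {e:+.3f} eV")
    print(f"equilibrium (within this scan): d = {d_min:.2f} A, E_bind = {e_min:+.3f} eV")
```

Comparing such curves computed with the new SCC-DFTB parameters against DFT reference curves is, in essence, the validation strategy the abstract describes for the Au3SCH3 and Au25SCH3 models.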
Especially for plasmon-mediated chemical transformations, light absorption, photoelectric conversion, electron-electron scattering, and electron-phonon coupling occur simultaneously at different timescales, rendering it very challenging to delineate the complex interplay of different factors. In this work, a trajectory surface hopping non-adiabatic molecular dynamics method is used to investigate the dynamics of plasmon excitation in an Au$_{20}$-CO system, including hot carrier generation, plasmon energy relaxation, and CO activation induced by electron-vibration coupling. The electronic properties indicate that when Au$_{20}$-CO is excited, a partial charge transfer takes place from Au$_{20}$ to CO. On the other hand, the dynamical simulations show that hot carriers generated after plasmon excitation transfer back and forth between Au$_{20}$ and CO. Meanwhile, the C-O stretching mode is activated due to the non-adiabatic couplings. The efficiency of plasmon-mediated transformation ($\sim$40\%) is obtained based on the ensemble average of these quantities. Our simulations provide important dynamical and atomistic insights into plasmon-mediated chemical transformation from the perspective of non-adiabatic simulations. ... [48][49][50] Based on a controlled approximation of density functional theory (DFT), this method has been successfully used to describe the electronic structures of large biological and organic systems, as well as their quantum properties. 50,51 Specifically, all of the DFTB calculations were performed using the DFTB+ code, 52 with the auorg set of parameters 53 in the second-order SCC-DFTB. The ground state structures of all systems were obtained with a geometry optimization with a force criterion of 10 −4 a.u, followed by a vibrational frequencies calculation to ensure that the structures correspond to a global energy minimum. ... ... The auorg set of parameters have been developed more specifically to describe gold bulk materials and surfaces or slabs decorated with organic molecules, 53 and has yet to be tested on GNCs. As a matter of fact, these GNCs possess a specific core-shell atomic arrangement, e.g., for Au 25 (SR) 18 − , the icosahedral cluster core encompasses 13 gold atoms and its outer part is built with 6 S-Au-S-Au-S staples. ... ... Indeed, as shown in a previous DFT study of this system, 11 the optical properties of such π-conjugated chromophores are more accurately reproduced when using a more sophisticated functional including a range-separation term, while a pure GGA functional returns bands which are too low in energy. It is then somewhat expected for the auorg set of parameters, created with a GGA type of functional, 53 to inherit such errors. Switching to a recent DFTB model to construct the parameters (not yet available for Au and S atoms) based on a range-separated type of functional, 61 may correct this behavior. ... Photoinduced Charge-Transfer in Chromophore-Labeled Gold Nanoclusters: Quantum Evidence of the Critical Role of Ligands and Vibronic Couplings Adrian Dominguez-Castro Carlos R. Lien-Medrano Khaoula Maghrebi The electron flow between a metallic aggregate and an organic molecule after excitation with light is a crucial step on which are based the hybrid photovoltaic nanomaterials. So far, designing such device with the help of theoretical approaches have been heavily limited by the computational cost of quantum dynamics models able to track the evolution of the excited states over time. 
In this contribution we present the first application of Time-Dependent Density Functional Tight-Binding (TD-DFTB) method for an experimental nanometer-sized gold-organic system consisting in a hexyl-protected Au25 cluster labelled with a pyrene fluorophore, in which the fluorescence quenching of the pyrene is attributed to an electron transfer from the metallic cluster to the dye. The full quantum rationalization of the electron transfer is attained through quantum dynamics simulations, highlighting the crucial role of the protecting ligands shell in the electron transfer, as well as the coupling with nuclear movement. This work paves the way towards a fast and accurate theoretical design of optoelectronic nanodevices. ... Therefore, many have turned their attention to the approximate quantum methods, such as Density Functional Based Tight-Binding (DFTB), to reduce the computational time and to extend the range of system sizes that are accessible. Fihey et al. 34 have recently developed parameters for gold-thiolates systems in the DFTB framework to accurately reproduce geometries, energies and electronic properties obtained at the DFT level. Oliveira et al. 35 have conducted a benchmark of the Au-Au parameters for a series of gold clusters against DFT. ... ... 62,63 The parameters used for the systems under study were obtained from previously reported publication. 34,36 The geometry optimization of the systems was done with the conjugate gradient algorithm without any constraint. Dispersion corrections were systematically introduced using the DFTB3 framework 64 along with hydrogen bond corrections, as proposed in previous publications. ... ... In the literature, there is currently one parameter set available for Au-Au, Au-O, and Au-H interactions, named as "auorg", which was benchmarked for the Au-Au interactions and Au interactions with organic molecules (O, S, H, C) in an aqueous environment. 34 Furthermore, water-water interaction can be described by different available sets. We tested water-water parameters from mio-1-1 set which was extensively used for water and solvated Titanium systems. ... Probing the structural properties of the water solvation shell around gold nanoparticles: A computational study Rika Tandiana émilie Brun Cécile Sicard-Roselli Carine Clavaguéra While subjected to radiation, gold nanoparticles (GNPs) have been shown to enhance the production of radicals when added to aqueous solutions. It has been proposed that the arrangement of water solvation layers near the water–gold interface plays a significant role. As such, the structural and electronic properties of the first water solvation layer surrounding GNPs of varying sizes were compared to bulk water using classical molecular dynamics and quantum and semi-empirical methods. Classical molecular dynamics was used to understand the change in macroscopic properties of bulk water in the presence of different sizes of GNP, as well as by including salt ions. The analysis of these macroscopic properties has led to the conclusion that larger GNPs induce the rearrangement of water molecules to form a 2D hydrogen-bond network at the interface. Quantum methods were employed to understand the electronic nature of the interaction between water molecules and GNPs along with the change in the water orientation and the vibrational density of states. The stretching region of vibrational density of states was found to extend into the higher wavenumber region, as the size of the GNP increases. 
This extension represents the dangling water molecules at the interface, as a result of reorientation of the water molecules in the first solvation shell. This multi-level study suggests that in the presence of GNP of increasing sizes, the first water solvation shell undergoes a rearrangement to maximize the water–water interactions as well as the water–GNP interactions. ... DFTB has been used to investigate various clusters including sodium [262], ceria [295], cadmium sulfides [233,264], bore [166], silver and gold [155,157,165,172,173,[267][268][269][270][271][272], ZnO [273], molybdenum disulfide [274], iron [154,275] or nanodiamond [276,277]. In addition to the necessary work dedicated to specific DFTB parametrization for these systems [155,156,172,173,[268][269][270]278], a number of studies have been devoted to their structural characterisation [63,153,154,157,161,165,268,278]. Figure 3 illustrates examples of investigated structures for silver cluster Ag 561 [172]. ... ... DFTB has been used to investigate various clusters including sodium [262], ceria [295], cadmium sulfides [233,264], bore [166], silver and gold [155,157,165,172,173,[267][268][269][270][271][272], ZnO [273], molybdenum disulfide [274], iron [154,275] or nanodiamond [276,277]. In addition to the necessary work dedicated to specific DFTB parametrization for these systems [155,156,172,173,[268][269][270]278], a number of studies have been devoted to their structural characterisation [63,153,154,157,161,165,268,278]. Figure 3 illustrates examples of investigated structures for silver cluster Ag 561 [172]. An interesting question is the evolution with size of the competition between ordered and disordered structures [157,165,173,272]. ... ... Then, they demonstrated its ability to accurately describe the low-energy structures of Au m (SMe) n species as well as qualitatively describe their electronic structure. A similar study was latter conducted by Fihey et al. who developed a new set of DFTB parameters for Au-X (X = Au, H, C, S, N, O) elements in order to better describe the interaction of thiolates and other molecules with gold particles [269]. Those parameters were validated by considering two species: Au 3 SCH 3 and Au 25 SCH 3 for which structural, energetic and electronic properties were calculated and compared to DFT results. ... Density-functional tight-binding: basic concepts and applications to molecules and clusters Fernand Spiegelman Nathalie Tarrat Jérôme Cuny Mathias Rapacioli The scope of this article is to present an overview of the Density Functional based Tight Binding (DFTB) method and its applications. The paper introduces the basics of DFTB and its standard formulation up to second order. It also addresses methodological developments such as third order expansion, inclusion of non-covalent interactions, schemes to solve the self-interaction error, implementation of long-range short-range separation, treatment of excited states via the time-dependent DFTB scheme, inclusion of DFTB in hybrid high-level/low level schemes (DFT/DFTB or DFTB/MM), fragment decomposition of large systems, large scale potential energy landscape exploration with molecular dynamics in ground or excited states, non-adiabatic dynamics. 
A number of applications are reviewed, focusing on -(i)- the variety of systems that have been studied such as small molecules, large molecules and biomolecules, bare or functionalized clusters, supported or embedded systems, and -(ii)- properties and processes, such as vibrational spectroscopy, collisions, fragmentation, thermodynamics or non-adiabatic dynamics. Finally outlines and perspectives are given. ... We first optimize the tetrahedral Au 20 structure in its ground state at the DFTB level with the auorg-1-1 parameter set. 62 This is The Journal of Chemical Physics ARTICLE scitation.org/journal/jcp followed by a 50 ps Born-Oppenheimer molecular dynamics with a classical time step Δt = 1 fs. ... ... To construct the average absorption spectrum, vertical excitation energies and oscillator strengths of all samples are computed using LR-TDDFTB. 62 For NAMD simulations, all excited state trajectories were prepared by plasmon excitation according to the absorption spectrum (see details in Appendix A) and propagated for 5 ps. The time step for nuclear and electronic motion is set to 1 and 0.25 fs, respectively. ... Investigation of plasmon relaxation mechanisms using nonadiabatic molecular dynamics Baopi Liu Hot carriers generated from the decay of plasmon excitation can be harvested to drive a wide range of physical or chemical processes. However, their generation efficiency is limited by the concomitant phonon-induced relaxation processes by which the energy in excited carriers is transformed into heat. However, simulations of dynamics of nanoscale clusters are challenging due to the computational complexity involved. Here, we adopt our newly developed Trajectory Surface Hopping (TSH) nonadiabatic molecular dynamics algorithm to simulate plasmon relaxation in Au 20 clusters, taking the atomistic details into account. The electronic properties are treated within the Linear Response Time-Dependent Tight-binding Density Functional Theory (LR-TDDFTB) framework. The relaxation of plasmon due to coupling to phonon modes in Au 20 beyond the Born–Oppenheimer approximation is described by the TSH algorithm. The numerically efficient LR-TDDFTB method allows us to address a dense manifold of excited states to ensure the inclusion of plasmon excitation. Starting from the photoexcited plasmon states in Au 20 cluster, we find that the time constant for relaxation from plasmon excited states to the lowest excited states is about 2.7 ps, mainly resulting from a stepwise decay process caused by low-frequency phonons of the Au 20 cluster. Furthermore, our simulations show that the lifetime of the phonon-induced plasmon dephasing process is ∼10.4 fs and that such a swift process can be attributed to the strong nonadiabatic effect in small clusters. Our simulations demonstrate a detailed description of the dynamic processes in nanoclusters, including plasmon excitation, hot carrier generation from plasmon excitation dephasing, and the subsequent phonon-induced relaxation process. ... Slater-Koster set [39,122,123] with orbital dependent Hubbard parameters. A periodic setup was used as well and the optimization was done at the Γ point. ... ... NEGF-DFTB was employed to compute the thermoelectric properties of OPE3 derivatives. We used the auorg-1-1 Slater-Koster set [39,122,123] with orbital dependent Hubbard parameters. A periodic setup was used, where the device is repeated perpendicular to the transport direction along the surface. ... 
A theoretical study of thermoelectric efficiency and cooling power in organic molecular junctions Fatemehsadat Tabatabaeikahangi Thermoelectricity is the conversion of heat to electricity and vice versa. As Seebeck discovered, a temperature difference across a device can generate electricity, while, conversely, an applied voltage can drive a heat current. During the past decades, the size of consumer electronics has been continuously decreasing. The down-sizing of electronic devices requires more efficient heat management. An interesting route towards this goal is the idea of using single molecules as electronic components, which gave rise to "molecular electronics". In fact, the usage of organic molecules in thermoelectric applications has attracted a great deal of attention due to their flexibility, relatively low price and their eco-friendly nature. In this work, the thermoelectric properties of molecular junctions based on oligo(phenyleneethynylene) (OPE3) derivatives were studied. With the help of Density Functional Theory (DFT) calculations, models for the molecular junctions were constructed. The electronic transport properties were obtained using the Non-Equilibrium Green's Function-Density Functional based Tight-Binding (NEGF-DFTB) approach. Firstly, the effect of side groups on the electronic conductance and thermopower of OPE3 derivatives was quantified. It is shown that these derivatives provide structural properties that are needed for highly efficient thermoelectric materials. Next, the effect of cross-linking molecules on the thermoelectric efficiency was investigated. Classical Molecular Dynamics (MD) was used to compute the phonon transport across the junctions. Combining the results from ab initio and MD calculations for electron and phonon transport, respectively, the thermoelectric efficiency in terms of the figure of merit ZT was computed for OPE3 derivatives. We have found that cross-linked molecules show a high ZT value, which makes them good candidates to be used as cooling systems. Finally, we introduce a circuit model that combines electron and phonon transport channels. This model allows one to determine optimal parameter ranges in order to maximize cooling. Overall, our results demonstrate that the OPE3 derivatives display the necessary structural rigidity and compatible electronic structure to enable high-performance devices for cooling applications. ... Among the vast literature focused on gold clusters, numerous works are devoted to Au20 [63,64,66,73,74,80,88,99,107] because this cluster presents a double magic number: its atomic structure is a highly symmetrical pyramid and, in the simple spherical Jellium model, its 20-electron outer electronic shell is closed (1s²1p⁶1d¹⁰2s² superorbital configuration). In recent studies, employing a new adaptation of DFTB parameters [109,111,112], we have investigated the potential energy surface (PES) of Au20(0,+,−) by combining a Parallel Tempering Molecular Dynamics ... ... The DFTB parameters used were adapted [112,116,123] from those developed by Fihey et al. [111] (the "auorg" set from the www.dftb.org website). ...
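As a concrete but purely illustrative example of the kind of DFTB setup referred to in these excerpts, the sketch below relaxes a small gold cluster with DFTB+ driven through ASE, assuming that "auorg"-type Slater-Koster files are available locally. The Slater-Koster path, the quoting convention for MaxAngularMomentum, and the cluster itself (a 13-atom icosahedron rather than Au20) are assumptions that may need adapting to the installed DFTB+/ASE versions.

```python
# Minimal sketch: relax a small Au cluster with DFTB+ via ASE using
# auorg-type Slater-Koster files. Paths and keyword spellings are assumptions.
import os
from ase.cluster import Icosahedron
from ase.calculators.dftb import Dftb
from ase.optimize import BFGS

os.environ["DFTB_PREFIX"] = "/path/to/auorg-1-1/"   # directory containing Au-Au.skf (assumed)

atoms = Icosahedron("Au", noshells=2)               # 13-atom test cluster (not Au20 itself)
atoms.calc = Dftb(
    label="au_cluster",
    Hamiltonian_SCC="Yes",
    Hamiltonian_SCCTolerance=1e-6,
    Hamiltonian_MaxAngularMomentum_Au='"d"',        # only 5d/6s valence, as in the auorg set
)

BFGS(atoms, trajectory="au_cluster.traj").run(fmax=0.05)   # forces in eV/Angstrom
print("DFTB total energy (eV):", atoms.get_potential_energy())
```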
Exploring energy landscapes at the DFTB quantum level using the threshold algorithm: the case of the anionic metal cluster Au20− THEOR CHEM ACC Johann Christian Schön We report the combination of the threshold algorithm with the Density Functional-based Tight Binding method allowing for the exploration of complex potential energy surfaces and the evaluation of probability flows between their regions, at the quantum level. This original scheme is used to explore the energy landscape of an anionic 20-atom gold cluster, Au20−. On the basis of the relevant structures, 19 structural groups are highlighted, all of them being variations about the pyramidal shape: (1) distorted pyramids, (2) pyramids in which the atom of one of the facets has been removed, leaving a hole, and placed at different positions on the cluster and (3) pyramids on which an atom located at a vertex has been removed and placed on an edge or on a facet. Upper limits of the energies required to connect the basins of the 19 groups on the potential energy surface are evaluated. Moreover, the attractive basins are identified on the basis of the analysis of the probability flows on the landscape. The comparison of the disconnectivity tree with the results of the flux analysis provides a consistent representation of the proximity of the Au20− basins. Finally, we show how the new scheme allowed for the identification of counter-intuitive transition pathways. ... 51 While the AO energies and the Hubbard parameters U are normally taken from DFT calculations of the free atom, the other electronic parameters and pairwise repulsive potentials are subject to optimization with the goal to reproduce certain desired properties; for instance, electronic band structure, atomization energies, reaction energies, and geometries (energy gradients or atomic forces). In order to parameterize the Au and P interactions for DFTB2, we adopted the Au electronic parameters from the "auorg" set published by Fihey et al. (referred to as auorg-a), 58 and a modified version of "auorg" by Oliveira et al. (referred to as auorg-c). 59 The difference between these two parameter sets lies in the Au 6p-orbital energy; in the auorg-a set it was taken as the true PBE orbital energy, while in the auorg-c set it was empirically shifted upward by about +0.0279 hartree. The main purpose of this orbital energy shift was to obtain improved values for the cohesive energies of pure gold nanoclusters with respect to PBE. 59 Following the work of "auorg", only 5d and 6s valence electrons are considered, in total 11 valence electrons per Au atom. ... ... in the calculations generating the training set, in order to avoid complications originating from a possible convolution of DFTB repulsive energy terms and the long-distance dispersion term.
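To make the parametrization step quoted above a little more tangible, here is a generic sketch of a dimer-based repulsive-potential fit: the repulsive contribution is taken as the difference between a reference binding curve and the electronic-only DFTB energy and is then represented by a short-ranged polynomial. The Morse-like curves are synthetic stand-ins, not data from the auorg or auorg-derived parameter sets.

```python
# Generic repulsive-potential fit on a dimer curve (synthetic stand-in data).
import numpy as np

r = np.linspace(2.2, 3.4, 25)                               # Au-Au distances (Angstrom), assumed grid
e_ref  = 2.3 * (1 - np.exp(-1.7 * (r - 2.55)))**2 - 2.30    # stand-in "reference" binding curve (eV)
e_elec = 2.3 * (1 - np.exp(-1.2 * (r - 2.75)))**2 - 2.60    # stand-in "electronic-only DFTB" curve (eV)

r_cut = 3.4                                                  # repulsive cutoff (assumed)
v_rep = e_ref - e_elec                                       # target repulsive contribution
v_rep -= np.interp(r_cut, r, v_rep)                          # enforce V_rep(r_cut) = 0

poly = np.poly1d(np.polyfit(r - r_cut, v_rep, deg=4))        # E_rep(r) as a polynomial in (r - r_cut)
rms = np.sqrt(np.mean((poly(r - r_cut) - v_rep) ** 2))
print(f"RMS error of the 4th-order repulsive fit: {rms:.2e} eV")
```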
In order to benchmark the accuracy of the new parameters, ligand binding energies and optimized geometries were compared to their TPSS counterparts for various complexes of PH 3 , PMe 3 , PPh 3 and small-to moderate-sized gold clusters. ... Density-functional tight-binding for phosphine-stabilized nanoscale gold clusters Van-Quan Vuong Jenica Marie Lorica Madridejos Bálint Aradi Stephan Irle We report a parameterization of the second-order density-functional tight-binding (DFTB2) method for the quantum chemical simulation of phosphine-ligated nanoscale gold clusters, metalloids, and gold surfaces. Our parameterization extends the previously released DFTB2 "auorg" parameter set by connecting it to the electronic parameter of phosphorus in the "mio" parameter set. Although this connection could technically simply be accomplished by creating only the required additional Au-P repulsive potential, we found that the Au 6p and P 3d virtual atomic orbital energy levels exert a strong influence on the overall performance of the combined parameter set. Our optimized parameters are validated against density functional theory (DFT) geometries, ligand binding and cluster isomerization energies, ligand dissociation potential energy curves, and molecular orbital energies for relevant phosphine-ligated Au n clusters (n = 2-70), as well as selected experimental X-ray structures from the Cambridge Structural Database. In addition, we validate DFTB simulated far-IR spectra for several phosphine- and thiolate-ligated gold clusters against experimental and DFT spectra. The transferability of the parameter set is evaluated using DFT and DFTB potential energy surfaces resulting from the chemisorption of a PH3 molecule on the gold (111) surface. To demonstrate the potential of the DFTB method for quantum chemical simulations of metalloid gold clusters that are challenging for traditional DFT calculations, we report the predicted molecular geometry, electronic structure, ligand binding energy, and IR spectrum of Au108S24(PPh3)16. ... 386 atoms are included in the central region, and the remaining gold atoms are distributed to the electrodes in six layers each. The auorg-1-1 parameter set was used 109 . After the calculation of the transmission function in a non-SCC approximation and using the wide band approximation, the zero-bias conductance was evaluated at a Fermi energy of −5 eV by G = G 0 T (E F ). ... Learning Conductance: Gaussian Process Regression for Molecular Electronics Michael Deffner Marc Philipp Weise Haitao Zhang Carmen Herrmann Experimental studies of charge transport through single molecules often rely on break junction setups, where molecular junctions are repeatedly formed and broken while measuring the conductance, leading to a statistical distribution of conductance values. Modeling this experimental situation and the resulting conductance histograms is challenging for theoretical methods, as computations need to capture structural changes in experiments, including the statistics of junction formation and rupture. This type of extensive structural sampling implies that even when evaluating conductance from computationally efficient electronic structure methods, which typically are of reduced accuracy, the evaluation of conductance histograms is too expensive to be a routine task. Highly accurate quantum transport computations are only computationally feasible for a few selected conformations and thus necessarily ignore the rich conformational space probed in experiments. 
To overcome these limitations, we investigate the potential of machine learning for modeling conductance histograms, in particular by Gaussian process regression. We show that by selecting specific structural parameters as features, Gaussian process regression can be used to efficiently predict the zero-bias conductance from molecular structures, reducing the computational cost of simulating conductance histograms by an order of magnitude. This enables the efficient calculation of conductance histograms even on the basis of computationally expensive first-principles approaches by effectively reducing the number of necessary charge transport calculations, paving the way towards their routine evaluation. ... Recently, DFTB has been coupled with the vdW and MBD methods 29,30 to incorporate long-range dispersion, but unfortunately few reliable DFTB parametrizations for metal-organic interfaces exist to date. 31 Machine learning-based interatomic potentials (MLIPs) offer high computational efficiency whilst retaining the accuracy of the underlying training data based on electronic structure theory. Atomistic MLIP methods include Gaussian Approximation Potentials [32][33][34] or neural network (NN) potentials (e.g. ... Long-range dispersion-inclusive machine learning potentials for structure search and optimization of hybrid organic-inorganic interfaces Julia Westermayr Shayantan Chaudhuri Andreas Jeindl Reinhard J. Maurer The computational prediction of the structure and stability of hybrid organic-inorganic interfaces provides important insights into the measurable properties of electronic thin film devices, coatings, and catalyst surfaces and plays an important role in their rational design. However, the rich diversity of molecular configurations and the important role of long-range interactions in such systems make it difficult to use machine learning (ML) potentials to facilitate structure exploration that otherwise require computationally expensive electronic structure calculations. We present an ML approach that enables fast, yet accurate, structure optimizations by combining two different types of deep neural networks trained on high-level electronic structure data. The first model is a short-ranged interatomic ML potential trained on local energies and forces, while the second is an ML model of effective atomic volumes derived from atoms-in-molecules partitioning. The latter can be used to connect short-range potentials to well-established density-dependent long-range dispersion correction methods. For two systems, specifically gold nanoclusters on diamond (110) surfaces and organic $\pi$-conjugated molecules on silver (111) surfaces, we train models on sparse structure relaxation data from density functional theory and show the ability of the models to deliver highly efficient structure optimizations and semi-quantitative energy predictions of adsorption structures. ... partially based on the DFTB+ [41][42] software package. We also used the density functional based tight-binding method with auorg-1-1 parametrization [43][44] as implemented in the DFTB+ package. We considered a realistic atomistic system including the STM tip and the substrate, both connected to semi-infinite electrodes. ... On‐Surface Formation of Cyano‐Vinylene Linked Chains by Knoevenagel Condensation Xinliang Feng Francesca Moresco Kwan Ho Au-Yeung Tim Kühne The rapid development of on‐surface synthesis provides a unique approach toward the formation of carbon‐based nanostructures with designed properties. 
Herein, we present the on‐surface formation of CN‐substituted phenylene vinylene chains on the Au(111) surface, thermally induced by annealing the substrate stepwise at temperatures between 220°C and 240°C. The reaction is investigated by scanning tunneling microscopy and density functional theory. Supported by the calculated reaction pathway, we assign the observed chain formation to a Knoevenagel condensation between an aldehyde and a methylene nitrile substituent. ... We performed self-consistent density-functional-based tight-binding simulations for geometric and electronic structural properties as implemented in the program package DFTB+ 7 . The parameter set "auorg-1-1" has been utilized in all our calculations 34 , which is an extension of the "mio-1-1" 35 parameter set to include gold. The "mio-1-1" set has been developed for organic molecules including O, N, C, H, and S atoms and works well for conformational energies and geometries of H-bonded systems 36 Determining minimum configurations. ... Describing chain-like assembly of ethoxygroup-functionalized organic molecules on Au(111) using high-throughput simulations Lokamani Lokamani Jeffrey Kelling Robin Ohmann Sibylle Gemming Due to the low corrugation of the Au(111) surface, 1,4-bis(phenylethynyl)-2,5-bis(ethoxy)benzene (PEEB) molecules can form quasi interlocked lateral patterns, which are observed in scanning tunneling microscopy experiments at low temperatures. We demonstrate a multi-dimensional clustering approach to quantify the anisotropic pair-wise interaction of molecules and explain these patterns. We perform high-throughput calculations to evaluate an energy function, which incorporates the adsorption energy of single PEEB molecules on the metal surface and the intermolecular interaction energy of a pair of PEEB molecules. The analysis of the energy function reveals, that, depending on coverage density, specific types of pattern are preferred which can potentially be exploited to form one-dimensional molecular wires on Au(111). ... Implementations use Slater-Koster parametrisations, which provide the orbital coefficients and potentials. In this study, Slater-Koster files generated by Arnaud and Fihey et al for SCC-DFTB simulations of gold were used [18]. The classical energy generated by the LAMMPS simulation (E classical ) and the total Mermin energy from the DFTB+ energy calculation (E electronic ) were extracted during labelling. ... Enhancing Classical Gold Nanoparticle Simulations with Electronic Corrections and Machine Learning Ryan Stocks Amanda S Barnard Classical simulations of materials and nanoparticles have the advantage of speed and scalability but at the cost of precision and electronic properties, while electronic structure simulations have the advantage of accuracy and transferability but are typically limited to small and simple systems due to the increased computational complexity. Machine learning can be used to bridge this gap by providing correction terms that deliver electronic structure results based on classical simulations, to retain the best of both worlds. In this study we train an artificial neural network (ANN) as a general ansatz to predict a correction of the total energy of arbitrary gold nanoparticles based on general (material agnostic) features, and a limited set of structures simulated with an embedded atom potential and the self-consistent charge density functional tight binding (SCC-DFTB) method. 
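A minimal sketch of the energy-correction idea just described, i.e. learning ΔE = E_DFTB − E_classical from cheap structural features and adding it back to classical energies, is given below. The features and energies are synthetic placeholders that only illustrate the workflow, not the EAM/SCC-DFTB data of the cited study, and a ridge-regularized linear model stands in for the neural network (the study itself notes that the ANN reduces to a near-linear relationship for well-chosen features).

```python
# Sketch of a classical-to-DFTB energy correction learned by ridge regression.
# All arrays are synthetic placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_features = 200, 4

X = rng.normal(size=(n_particles, n_features))              # e.g. coordination statistics (hypothetical)
true_w = np.array([2.0, -1.0, 0.5, 0.0])
delta_E = X @ true_w + 0.1 * rng.normal(size=n_particles)   # stand-in for E_DFTB - E_classical (eV)

lam = 1e-2                                                  # ridge regularization strength
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ delta_E)

rmse = np.sqrt(np.mean((X @ w - delta_E) ** 2))
print(f"RMSE of the learned correction: {rmse:.3f} eV")
# At prediction time: E_corrected = E_classical + features @ w
```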
We find that an accurate model with an overall precision of 14 eV or 8.6 % can be found using a diverse range of particles and a large number of manually generated features which were then reduced using automatic data-driven approach to reduce evaluation bias. We found the ANN reduces to a linear relationship if a suitable subset of important features are identified prior to training, and that the prediction can be improved by classifying the nanoparticles into kinetically limited and thermodynamically limited subsets based prior to training the ANN corrections. The results demonstrate the potential for machine learning to enhance classical molecular dynamics simulations without adding significant computational complexity, and provides methodology that could be used to predict other electronic properties which cannot be calculated solely using classical simulations. ... They are more thermally, and chemically resistant compared to the polyacetylene molecules studied as the linkers in our previous work 27 , which motivates us to study them here. Geometry optimization and calculations of band structures of two periodical chains, consisting of Au 309 nanoparticle and fragment of polypyrrole or polythiophene ( Fig. 4 and Fig. 6), respectively, were carried out by the self-consistent-charge density-functional tight-binding method (SCC DFTB) 40 with use of DFTB+ code (Version 19.1) 41 and parameters set that is appropriate for the description of the interaction between the atoms in the series of carbon, nitrogen, oxygen, hydrogen, sulfur and gold 40,42,43 . Although the simulations were performed for periodical systems in the three-dimensional space, the periodicity of these chains was considered along x-direction. ... Thermoelectric and Plasmonic Properties of Metal Nanoparticles Linked by Conductive Molecular Bridges Sergey Polyutov Aleksandr S. Fedorov Pavel Krasnov Maxim A. Visotin Thermoelectric and plasmonic properties of systems comprised of small golden nanoparticles (NP) linked by narrow conductive polymer bridges are studied using the original hybrid quantum‐classical model. The bridges considered here to be either conjugated polyacetylene, or polypyrrole, or polythiophene chain molecules terminated by thiol groups. The parameters required for the model were obtained using DFT and DFTB simulations. We found that charge‐transfer plasmons in the considered dumbbell structures possess the frequency in the infrared region for all considered molecular linkers. The appearance of plasmon vibrations and the existence of charge flow through the conductive molecule, with manifestation of quantum properties, were confirmed using frequency‐dependent polarizability calculations implemented in the Coupled Perturbed Kohn‐Sham method. To study the thermoelectric properties of the 1D periodical systems, we have derived a universal equation for the Seebeck coefficient. Phonon part of the thermal conductivity for the periodical −NP−S−C8H8‐ system was calculated by the classical molecular dynamics. The thermoelectric figure of merit ZT was calculated by considering the electrical quantum conductivity of the systems in the ballistic regime. It is shown that for Au309 nanoparticles connected by polyacetylene, polypyrrole, or polythiophene chains at T=300 K, ZT∼{0.08;0.45;0.40}, respectively. ... 41 DFTB is an approximate DFT method with an appealing cost/accuracy ratio and has been successfully used in a variety of applications in the field of molecular electronics. 
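For orientation, the figure of merit quoted earlier in this entry combines the quantities discussed throughout this section as ZT = S²GT/(κ_el + κ_ph). The toy evaluation below uses hypothetical junction parameters, with κ_el estimated from the Wiedemann-Franz law; it is not a reproduction of the cited values.

```python
# Toy figure-of-merit arithmetic for a molecular junction: ZT = S^2 G T / (k_el + k_ph).
G0 = 7.748e-5          # conductance quantum, S
L0 = 2.44e-8           # Lorenz number, W Ohm / K^2
T  = 300.0             # temperature, K

G        = 1e-2 * G0   # electrical conductance (assumed)
S        = -10e-6      # Seebeck coefficient, V/K (assumed, same order as the values quoted above)
kappa_ph = 1e-12       # phonon thermal conductance, W/K (assumed)

kappa_el = L0 * G * T                          # Wiedemann-Franz estimate
ZT = S**2 * G * T / (kappa_el + kappa_ph)
print(f"kappa_el = {kappa_el:.2e} W/K, ZT = {ZT:.2e}")
```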
[42][43][44] Here we employed the auorg-0-1 Slater-Koster set 45,46 with orbital dependent Hubbard parameters and used a periodic setup, where the device is replicated perpendicular to the transport direction along the surface. This entails a solution of the Poisson equation under periodic boundary conditions to obtain the charge density in the device region. ... Electronic conductance and thermopower of single-molecule junctions of oligo(phenyleneethynylene) derivatives Nanoscale Hervé Dekkiche Andrea Gemma F. Tabatabaei Martin Bryce We report the synthesis and the single-molecule transport properties of three new oligo(phenyleneethynylene) (OPE3) derivatives possessing terminal dihydrobenzo[b]thiophene (DHBT) anchoring groups and various core substituents (phenylene, 2,5-dimethoxyphenylene and 9,10-anthracenyl). Their electronic conductance and their Seebeck coefficient have been determined using scanning tunneling microscopy-based break junction (STM-BJ) experiments between gold electrodes. The transport properties of the molecular junctions have been modelled using DFT-based computational methods which reveal a specific binding of the sulfur atom of the DHBT anchor to the electrodes. The experimentally determined Seebeck coefficient varies between -7.9 and -11.4 μV K-1 in the series and the negative sign is consistent with charge transport through the LUMO levels of the molecules. ... Similarly, description of different crystal phases with the same chemical composition but with very different coordination numbers can be challenging. Recent examples show, 21,22 however, that it is possible to reach a reasonable accuracy if special care is taken during the parameterization process. ... DFTB+, a software package for efficient approximate density functional theory based atomistic simulations Ben Hourahine Volker Blum DFTB+ is a versatile community developed open source software package offering fast and efficient methods for carrying out atomistic quantum mechanical simulations. By implementing various methods approximating density functional theory (DFT), such as the density functional based tight binding (DFTB) and the extended tight binding method, it enables simulations of large systems and long timescales with reasonable accuracy while being considerably faster for typical simulations than the respective ab initio methods. Based on the DFTB framework, it additionally offers approximated versions of various DFT extensions including hybrid functionals, time dependent formalism for treating excited systems, electron transport using non-equilibrium Green's functions, and many more. DFTB+ can be used as a user-friendly standalone application in addition to being embedded into other software packages as a library or acting as a calculation-server accessed by socket communication. We give an overview of the recently developed capabilities of the DFTB+ code, demonstrating with a few use case examples, discuss the strengths and weaknesses of the various features, and also discuss on-going developments and possible future perspectives. ... Geometry optimization end calculations of electronic properties of a family consisting of six icosahedron shaped similar gold nanoparticles consisting of 55, 147, 309, 561, 923, and 1415 atoms (see Fig. 5) were carried out by the DFTB method 50 with use of a parameter set that is appropriate for the description of bulk gold clusters and bulk material, as well as AunSCH 3 clusters. 
51 A calculation of the band structure for the periodical structure -[Au 147 SC 8 H 8 S]-was also made. Furthermore, for using in (6) the total energies Etot of the isolated gold nanoparticles having different total charges, Q(e) ∈ {−2, −1, 0, 1, 2} were calculated (Table I). ... Charge-transfer plasmons with narrow conductive molecular bridges: A quantum- classical theory Hans Agren We analyze a new type of plasmon system arising from small metal nanoparticles linked by narrow conductive molecular bridges. In contrast to the well-known charge-transfer plasmons, the bridge in these systems consists only of a narrow conductive molecule or polymer in which the electrons move in a ballistic mode, showing quantum effects. The plasmonic system is studied by an original hybrid quantum-classical model accounting for the quantum effects, with the main parameters obtained from first-principles density functional theory simulations. We have derived a general analytical expression for the modified frequency of the plasmons and have shown that its frequency lies in the near-infrared (IR) region and strongly depends on the conductivity of the molecule, on the nanoparticle-molecule interface, and on the size of the system. As illustrated, we explored the plasmons in a system consisting of two small gold nanoparticles linked by a conjugated polyacetylene molecule terminated by sulfur atoms. It is argued that applications of this novel type of plasmon may have wide ramifications in the areas of chemical sensing and IR deep tissue imaging. ... The Au substrate was described as a slab with two dimensional PBC and theoretically optimized lattice parameter of 4.159 Å [31] was used. The Au substrate was optimized using DFTB platform employing DFTB.org parameter directory [32]. For the pEDA calculations without the thiol linker, the terminal hydrogen was attached to the gold with a bond length of 1.980 Å as shown in Scheme 1(A). ... Periodic Energy Decomposition Analysis for Electronic Transport Studies as a Tool for Atomic Scale Device Manufacturing IJEM Paven Thomas Mathew Fengzhou Fang Atomic scale manufacturing is a necessity of the future to develop atomic scale devices with high precision. A different perspective of the quantum realm, that includes the tunnelling effect, leakage current at the atomic-scale, Coulomb blockade and Kondo effect, is inevitable for the fabrication and hence, the mass production of these devices. For these atomic-scale device development, molecular level devices must be fabricated. Proper theoretical studies could be an aid towards the experimental realities. Electronic transport studies are the basis to realise and interpret the problems happening at this minute scale. Keeping these in mind, we present a periodic energy decomposition analysis (pEDA) of two potential candidates for moletronics: phthalocyanines and porphyrins, by placing them over gold substrate cleaved at the (111) plane to study the adsorption and interaction at the interface and then, to study their application as a channel between two electrodes, thereby, providing a link between pEDA and electronic transport studies. pEDA provides information regarding the bond strength and the contribution of electrostatic energy, Pauli's energy, orbital energy and the orbital interactions. Combining this analysis with electronic transport studies, can provide novel directions for atomic/close-to-atomic-scale manufacturing (ACSM). 
Literature survey shows that this is the first work which establishes a link between pEDA and electronic transport studies and a detailed pEDA study on the above stated molecules. The results show that among the molecules studied, porphyrins are more adsorbable over gold substrate and conducting across a molecular junction than phthalocyanines, even though, both molecules show a similarity in adsorption and conduction when a terminal thiol linker is attached. A further observation establishes the importance of attractive terms, which includes interaction, orbital and electrostatic energies, in correlating the pEDA study with the transport properties. By progressing this research, further developments could be possible in atomic-scale manufacturing in the future. Ab-initio Simulations and Structure Fabrication at Atomic and Close-to-atomic Scale using Atomic Force Microscopy To increase the number of electronic components in a single integrated circuit chip, the functional feature size should be reduced to the atomic and close-to-atomic scale (ACS). For this, the application of molecules could be utilised as a channel for current conduction. This thesis focuses on the fundamental aspects of this theme to help us achieve atomic scale device fabrication in the future. A literature review on advances in moletronics and atomic and close-to-atomic scale manufacturing (ACSM) research with the application of atomic force microscopy (AFM) is given in chapter 1. ACS device manufacturing using molecules as the building block requires to overcome mainly three fundamental problems. Firstly the orientation of the molecule when placed between the electrodes plays a critical role in electronic transport.
This is explained in chapter 2, which gives a detailed ab-initio simulation studies of current flow in inorganic molecule, such as polyoxometalates (POMs) and organic molecules such as phthalocyanines (Pc) and porphyrins (Pr), by incorporating them between gold electrodes. For the POM molecule, longitudinal orientation showed better conduction than lateral orientation, whereas for Pc and Pr molecules, the geometrically optimised orientation displayed better electronic transport properties than the tautomerized structure. Secondly, the bonding interaction between the electrode and the molecular terminal atoms helps us to determine the rate of electronic transport at the junction. Chapter 3 inspects this interaction through a periodic energy decomposition analysis on Pc and Pr derivatives. The attractive and repulsive energy terms of the bonding interactions proved that Pr molecules are better interactive over the gold substrate in comparison to Pc molecules. Electronic transport studies performed on their derivatives with and without thiol linkers further supported this result. Thus, a link between these two studies were established. This paves path for future work to select appropriate molecules and electrodes to demonstrate transistor actions for atomic scale device fabrication. Finally, the possibility of the fabrication of ACS electrodes with a single atomic protrusion for the attachment of molecules needs to be experimentally validated. As a first step towards this, fundamental studies using AFM to achieve atomic layer removal were carried out taking into account different machining parameters. This is given in chapter 4 and chapter 5. In chapter 4, mechanical AFM-based scratching techniques over gold and silicon using diamond tips were performed. In silicon substrate, material removal having a minimum depth of 3.2Å which is close to about 3 silicon atom thickness, has been achieved. On gold, a minimum depth of 9.7Å, close to 7 atom thickness has been achieved. In chapter 5, electrochemical AFM-based lithography over HOPG and silicon using platinum coated tips were carried out. Results showed that in bare silicon local anodic oxidation took place instead of material removal. Even in hydrofluoric (HF) treated silicon, oxidation occurred but in a controlled and well defined manner. From this, it can be deduced that HF treated silicon is better suited for structure fabrication than bare silicon. In the case of HOPG, different patterns such as nano-holes, nanolines and intrinsic patterns were machined and material removal close-to-a single atomic layer, ~3.35Å was achieved. Results from chapter 4 and 5 reveal that controlled AFM-based scratching techniques can ensure the fabrication of well-defined atomic structures for the application of molecular devices. Since ACSM represents the next phase of manufacturing, this thesis proposes some of the primary works required to realise ACSM using the currently available techniques and simulation methodologies to bring us one step closer in achieving considerable advancements in this field in the near future. In-depth theoretical understanding of the chemical interaction of aromatic compounds with a gold nanoparticle PHYS CHEM CHEM PHYS Nguyen-Thi Van-Oanh Gold Nanoparticles (GNPs), owing to their unique properties and versatile preparation strategy, have been demonstrated to exhibit promising applications in diverse fields, which include bio-sensors, catalysts, nanomedicines and radiotherapy. 
Yet, the nature of the interfacial interaction of GNPs with their chemical environment remains elusive. Experimental vibrational spectroscopy can reveal different interactions of aromatic biological molecules absorbed on GNPs, that may result from changes in the orientation of the molecule. However, the presence of multiple functional groups and the aqueous solvent introduces competition, and complexifies the spectral interpretations. Therefore, our objective is to theoretically investigate the adsorption of aromatic molecules containing various functional groups on the surface of GNPs to comparatively study their preferred adsorption modes. The interaction between Au32, as a model of GNPs, and a series of substituted aromatic compounds that includes benzene, aniline, phenol, toluene, benzoic acid, acetophenone, methyl benzoate, and thiophenol, is investigated. Our computed interaction energies highlight the preference of the aromatic ring to lie flat on the surface. The orientations of the molecules can be distinguished using infrared spectroscopy along with strong changes in intensity and significant shifts of some vibrational modes when the molecule interacts with the GNP. The interaction energy and the electron transfer between the nanoparticle and the aromatic molecule are not found to correlate, possibly because of significant back donation of electrons from GNPs to organic molecules as revealed by charge decomposition analysis. A thorough quantum topological analysis identifies multiple non-covalent interactions and assigns the nature of the interaction mostly to dative interactions between the aromatic ring and the GNP as well as dispersive interaction. Finally, energy decomposition analyses point out the role of the charge transfer energy contribution in the subtle balance of the different physical components. STM-induced ring closure of vinylheptafulvene molecular dipole switches on Au(111) Oumaima Aiboudi Dihydroazulene/vinylheptafulvene pairs are known as molecular dipole switches that undergo a ring-opening/-closure reaction by UV irradiation or thermal excitation. Herein, we show that the ring-closure reaction of a single vinylheptafulvene adsorbed on the Au(111) surface can be induced by voltage pulses from the tip of a scanning tunneling microscope. This cyclization is accompanied by the elimination of HCN, as confirmed by simulations. When inducing lateral movements by applying voltage pulses with the STM tip, we observe that the response of the single molecules changes with the ring closing reaction. This behaviour is discussed by comparing the dipole moment and the charge distribution of the open and closed forms on the surface. First-Principles Nonequilibrium Green's Function Approach to Energy Conversion in Nanoscale Optoelectronics Chiyung Yam Rulin Wang Hao Zou Understanding photon-electron conversion on the nanoscale is essential for future innovations in nano-optoelectronics. In this article, based on nonequilibrium Green's function (NEGF) formalism, we develop a quantum-mechanical method for modeling energy conversion in nanoscale optoelectronic devices. The method allows us to study photoinduced charge transport and electroluminescence processes in realistic devices. First, we investigate the electroluminescence properties of a two-level model with two different treatments of inelastic scatterings. We show the regime where self-consistency between electron and photon is important for correct description of the inelastic scatterings. 
The method is then applied to model single-molecule junctions based on the density-functional tight-binding approach. The predicted emission spectra are found to be in very good agreement with experimental measurements. For nanostructured materials, the method is further applied to study the photoresponse of a two-dimensional graphene/graphite-C3N4 heterojunction photovoltaic device. The simulations demonstrate clearly the impact of atomistic details on the optoelectronic properties of nanodevices. This work provides a practical theoretical framework that can be applied to model and design realistic nanodevices. A general tight-binding based energy decomposition analysis scheme for intermolecular interactions in large molecules J CHEM PHYS Yuan Xu Shu Zhang Erik Lindahl Peifeng Su In this work, a general tight-binding based energy decomposition analysis (EDA) scheme for intermolecular interactions is proposed. Different from the earlier version [Xu et al., J. Chem. Phys. 154, 194106 (2021)], the current tight-binding based density functional theory (DFTB)-EDA is capable of performing interaction analysis with all the self-consistent charge (SCC) type DFTB methods, including SCC-DFTB2/3 and GFN1/2-xTB, despite their different formulas and parameterization schemes. In DFTB-EDA, the total interaction energy is divided into frozen, polarization, and dispersion terms. The performance of DFTB-EDA with SCC-DFTB2/3 and GFN1/2-xTB for various interaction systems is discussed and assessed Molecular electronic refrigeration against parallel phonon heat leakage channels Fatemeh Tabatabatai Samy Merabia Bernd Gotsmann Thomas A. Niehaus Due to their structured density of states, molecular junctions provide rich resources to filter and control the flow of electrons and phonons. Here we compute the out of equilibrium current-voltage characteristics and dissipated heat of some recently synthesized oligophenylenes (OPE3) using the Density Functional based Tight-Binding (DFTB) method within Non-Equilibrium Green's Function Theory (NEGF). We analyze the Peltier cooling power for these molecular junctions as function of a bias voltage and investigate the parameters that lead to optimal cooling performance. In order to quantify the attainable temperature reduction, an electro-thermal circuit model is presented, in which the key electronic and thermal transport parameters enter. Overall, our results demonstrate that the studied OPE3 devices are compatible with temperature reductions of several K. Based on the results, some strategies to enable high performance devices for cooling applications are briefly discussed.
Elaboration in silico of molecular systems for the fabrication of nano- and optoelectronic devices Vincent Delmas The computational studies developed in this thesis are divided into two research projects. The first part concerns a theoretical study of luminescent polynuclear copper (I) complexes. The second chapter reports computational results that are compared to experimental data obtained by C. Lescop and collaborators (ISCR - INSA Rennes) for copper dimers bridged by three diphosphines. Different levels of calculations are tested in order to quantify the accuracy of the results. The following chapter brings together the study of related metallacyclic compounds comprising respectively 6 and 8 copper atoms. These studies show the importance of the geometric reorganization of excited states and intermolecular interactions at the solid state. The second part of this thesis aims at providing theoretical support to the development of molecular devices presenting high thermoelectric performance. The first chapter reports on the state of the art and the factors influencing the variation of thermoelectric properties. Chapters II and III present the theoretical tools available for the study of the transmission properties of molecular junctions and their thermoelectric characteristics. The advantages of organometallic compounds in inducing thermoelectric properties are treated in chapters IV and V. A significant increase in conductance and Seebeck coefficients is calculated. A computational molecular design implies the use of cheap computational methods. The density functional tight-binding methods are evaluated for this purpose in the last chapter. Doping Engineering of Single-Walled Carbon Nanotubes by Nitrogen Compounds Using Basicity and Alignment Bogumiła Elżbieta Kumanek Karolina Milowska Lukasz Przypis Dawid Janas Charge transport properties in single-walled carbon nanotubes (SWCNTs) can be significantly modified through doping, tuning their electrical and thermoelectric properties. In our study, we used more than 40 nitrogen-bearing compounds as dopants and determined their impact on the material's electrical conductivity. The application of nitrogen compounds of diverse structures and electronic configurations enabled us to determine how the dopant nature affects the SWCNTs. The results reveal that the impact of these dopants can often be anticipated by considering their Hammett's constants and pKa values. Furthermore, the empirical observations supported by first-principles calculations indicate that the doping level can be tuned not only by changing the type and the concentration of dopants but also by varying the orientation of nitrogen compounds around SWCNTs.
Metadynamics molecular dynamics and isothermal Brownian-type molecular dynamics simulations for the chiralcluster Au 18 Chong Chiat Lim Albert S.K. Lai In an effort to gain insight into enantiomeric transitions, their transition mechanism, time span of transitions and distribution of time spans etc., we performed molecular dynamics (MD) simulations on chiral clusters Au10, Au15 and Au18, and found that viable reaction coordinates can be deduced from simulation data for enlightening the enantiomeric dynamics for Au10 and Au15, but not so for Au18. The failure in translating the Au18-L ⇌ Au18-R transitions by MD simulations has been chalked up to the thermal energy kBT at 300 K being much lower than energy barriers separating the enantiomers of Au18. Two simulation strategies were taken to resolve this simulation impediment. The first one uses the well-tempered metadynamics MD (MMD) simulation, and the second one adeptly applies first a somewhat crude MMD simulation to locate a highly symmetrical isomer Au18S and subsequently employed it as initial configuration in the MD simulation. In both strategies, we work in collective variable space of lower dimensionality. The well-tempered MMD simulation tactic was carried out aiming to offer a direct verification of Au18 enantiomers, while the tactic to conduct MMD/MD simulations in two consecutive simulation steps was intended to provide an indirect evidence of the existence of enantiomers of Au18 given that energy barriers separating them are much higher than ca. kBT at 300 K. This second tactic, in addition to confirming indirectly Au18-L and Au18-R starting from the symmetrical cluster Au18S, the simulation results shed light also on the mechanism akin to associative/nonassociative reaction transitions. ReaxFF Molecular Dynamics Simulations of Large Gold Nanocrystals Marie Fadigas Johannes Richardi DNA sequencing based on electronic tunneling in gold nanogap: a first-principles study Shizheng Wen Chi-Yung Yam Deoxyribonucleic acid (DNA) sequencing has found wide applications in medicine including treatment of diseases, diagnosis and genetics studies. Rapid and cost-effective DNA sequencing has been achieved by measuring the transverse electronic conductance as a single-stranded DNA is driven through a nanojunction. With the aim of improving the accuracy and sensitivity of DNA sequencing, we investigate the electron transport properties of DNA nucleobases within gold nanogaps based on first-principles quantum transport simulations. Considering the fact that the DNA bases can rotate within the nanogap during measurements, different nucleobase orientations and their corresponding residence time within the nanogap are explicitly taken into account based on their energetics. This allows us to obtain an average current that can be compared directly to experimental measurements. Our results indicate that bare gold electrodes show low distinguishability among the four DNA nucleobases while the distinguishability can be substantially enhanced with sulfur atom decorated electrodes. We further optimized the size of the nanogap by maximizing the residence time of the desired orientation. Dynamical evolution of the Schottky barrier as a determinant contribution to electron-hole pair stabilization and photocatalysis of plasmon-induced hot carriers Matias Berdakin German Jose Soldano Franco Bonafé Cristián G. Sánchez The harnessing of plasmon-induced hot carriers promises to open new avenues for the development of clean energies and chemical catalysis. 
The extraction of carriers before thermalization and recombination is of primordial importance to obtain appealing conversion yields. Here, hot carrier injection in the paradigmatic Au-TiO2 system is studied by means of electronic and electron-ion dynamics. Our results show that pure electronic features (without considering many-body interactions or dissipation to the environment) contribute to the electron-hole separation stability. These results reveal the existence of a dynamic contribution to the interfacial potential barrier (Schottky barrier) that arises at the charge injection pace, impeding electronic back transfer. Furthermore, we show that this charge separation stabilization provides the time needed for the charge to leak to capping molecules placed over the TiO2 surface, triggering a coherent bond oscillation that will lead to a photocatalytic dissociation. We expect that our results will add new perspectives to the interpretation of the already detected long-lived hot carrier lifetimes, their catalytical effect, and concomitantly to their technological applications. Interplay of electron and phonon channels in the refrigeration through molecular junctions Nanomolecular Metallurgy: Transformation from Au144(SCH2CH2Ph)60 to Au279(SPh-tBu)84 Kalpani Hirunika Wijesinghe Naga Arjun Sakthivel Luca Sementa Amala Dass Carrier-Envelope-Phase Modulated Currents in Scanning Tunneling Microscopy Ziyang Hu YanHo Kwok Guanhua Chen Shaul Mukamel Diels–Alder Reaction in a Molecular Junction Leopoldo Mejía Diego Garay-Ruiz Ignacio Franco A combined experimental and theoretical study of 1,4-bis(phenylethynyl)-2,5-bis(ethoxy)benzene adsorption on Au(111) SURF SCI The electronic and geometrical structure of 1,4-bis(phenylethynyl)-2,5-bis(ethoxy)benzene (PEEB) molecules adsorbed on a Au(111) surface is investigated by low temperature scanning tunneling microscopy (STM) and scanning tunneling spectroscopy (STS) in conjunction with density-functional-based tight-binding (DFTB) simulations of the density of states and the interaction with the substrate.
Our density functional theory calculations indicate that the PEEB molecule is physisorbed on the Au(111) substrate, with negligible distortion of the molecular geometry and charge transfer between molecule and substrate. Neutral gold clusters studied by the isothermal Brownian‐type molecular dynamics and metadynamics molecular dynamics simulations The DFTB theory was combined with the isothermal Brownian‐type molecular dynamics (MD) and metadynamics molecular dynamics (MMD) algorithms to perform simulation studies for Au clusters. Two representative DFTB parametrizations were investigated. In one parametrization, the DFTB‐A, the Slater–Koster parameters in the DFTB energy function were determined focusing on the ionic repulsive energy part, Erep and the other, the DFTB‐B, due attention was paid to the electronic band‐structure energy part, Eband. Minimized structures of these two parametrizations were separately applied in MD and MMD simulations to generate unbiased and biased trajectories in collective variable (CV) space, respectively. Here, we found the MD simulations monitored at 300 K manifest fluxional characteristics in planar cluster Au9/DFTB‐A, but give no discernible tracts of fluxionality for planar Au8/DFTB‐A and Au8/DFTB‐B, for nonplanar Au10/DFTB‐A and, to some extent, for nonplanar Au9/DFTB‐B; they are plausibly being hindered by higher‐than kBT energy barriers. Very recent FIR‐MPD spectroscopy measurements, however, were reported to have detected at 300 K both the planar and nonplanar neutral Aun clusters in the size range 5 ≤ n ≤ 13. The failure of MD simulations has prompted us to apply the MMD simulation and construct the free energy landscape (FEL) in CV space. Through scrutinizing the FELs of these clusters and their associated structures, we examine the relative importance of Erep/DFTB‐A and Eband/DFTB‐B in unraveling the covalent‐like behavior of valence electrons in Aun. Most important of all, we shall evaluate the DFTB parametrization in MMD strategy through comparing extensively the simulation data recorded with the gas‐phase experimental data. STM induced manipulation of azulene-based molecules and nanostructures: the role of the dipole moment Frank Eisenhut Among the different mechanisms that can be used to drive a molecule on a surface by the tip of a scanning tunneling microscope at low temperature, we used voltage pulses to move azulene-based single molecules and nanostructures on Au(111). Upon evaporation, the molecules partially cleave and form metallo-organic dimers while single molecules are very scarce, as confirmed by simulations. By applying voltage pulses to the different structures under similar conditions, we observe that only one type of dimer can be controllably driven on the surface, which has the lowest dipole moment of all investigated structures. Experiments under different bias and tip height conditions reveal that the electric field is the main driving force of the directed motion. We discuss the different observed structures and their movement properties with respect to their dipole moment and charge distribution on the surface. Molecular rotors with designed polar rotating groups possess mechanics-controllable wide-range rotational speed Jian Shao Wenpeng Zhu Zhang Xiaoyue Yue Zheng Molecular rotors with controllable functions are promising for molecular machines and electronic devices. Especially, fast rotation in molecular rotor enables switchable molecular conformations and charge transport states for electronic applications. 
However, the key to molecular rotor-based electronic devices comes down to a trade-off between fast rotational speed and thermal stability. Fast rotation in a molecular rotor requires a small energy barrier height, which disables its controllability under thermal excitation at room temperature. To overcome this trade-off dilemma, we design molecular rotors with co-axial polar rotating groups to achieve wide-range mechanically controllable rotational speed. The interplay between polar rotating groups and directional mechanical load enables a "stop-go" system with a wide-range rotational energy barrier. We show through density functional calculations that directional mechanical load can modulate the rotational speed of designed molecular rotors. At a temperature of 300 K, these molecular rotors operate at low rotational speed in the native state and accelerate tremendously (up to 10¹⁹) under mechanical load. Protected gold and silver nanoparticles: from understanding metal-ligand interactions at the quantum scale towards atomistic modelling Clément Dulong The M-thiol(ate) interface that makes up the surface of crystalline gold or silver nanoparticles was studied with quantum chemistry tools (DFT, QTAIM, ELF, NBO) at two different scales: a pyramidal M20 cluster (a model of nanoparticle edges and defects) and periodic (111) and (100) surfaces, interacting with a methyl thiol ligand or a methyl thiolate radical. After setting up a procedure that allows, for the first time, a reliable quantitative topological analysis of periodic systems treated with plane waves, it was shown that: MeSH systematically physisorbs in the "top" position, with a dative bond forming between the sulfur and a metal atom, accompanied by a complex charge-transfer mechanism; MeS chemisorbs mainly in a bridged position, with a competition between the recovery of an electron leading to MeS− and the formation of two dative bonds accompanied by S→M charge transfer; and Au-S bonds are always stronger than Ag-S bonds, owing to the relativistic effects of gold. A reactive Ag-thiolate force field was optimized by a supervised learning method and correctly reproduces the adsorption sites and energies. It allowed a comparison with existing Au-thiolate models and opens the way to simulations of complete nanoparticles. Enantiomeric Transitions in the Chiral Cluster Au15 Studied by a Reaction Coordinate Deduced from Molecular Dynamics Simulations J PHYS CHEM A A recently developed modified basin hopping (MBH) optimization algorithm, combined with an energy function calculated by the semiempirical density functional tight-binding (DFTB) theory, was applied to determine the lowest-energy structures of Aun clusters with size n = 3-20. It was predicted from the DFTB/MBH optimization algorithm calculations that clusters Au10, Au15, and Au18 exhibit chiral properties; i.e., each of these three clusters possesses the same energy value and associated with it are two nonsuperposable mirror-image clusters. In the potential energy landscape, there thus exist multidimensional barriers separating the two enantiomers, and this lowest-energy double-well morphology is surrounded by potential-energy minima of higher energies.
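Several of the cluster studies quoted here rely on (well-tempered) metadynamics in a low-dimensional collective-variable space to cross barriers that plain MD cannot overcome at 300 K. The following generic one-dimensional sketch, overdamped Langevin dynamics on a double-well "collective variable" with Gaussian hills whose height is tempered by the accumulated bias, only illustrates that bias-deposition idea; it is not the authors' implementation, and all parameters are arbitrary reduced units.

```python
# Generic 1D well-tempered metadynamics on a double-well CV potential U(s) = (s^2 - 1)^2.
# Reduced units; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
kB_T = 0.2                       # thermal energy, well below the barrier height of 1
bias_factor = 10.0               # gamma = (T + dT) / T
dT = (bias_factor - 1.0) * kB_T  # well-tempered "delta T" expressed as an energy
w0, sigma = 0.1, 0.2             # initial hill height and width
dt, n_steps, stride = 1e-3, 200_000, 500

hills_c, hills_w = [], []

def bias_and_grad(s):
    """Accumulated Gaussian bias V(s) and its derivative dV/ds."""
    if not hills_c:
        return 0.0, 0.0
    c, w = np.asarray(hills_c), np.asarray(hills_w)
    g = w * np.exp(-((s - c) ** 2) / (2.0 * sigma ** 2))
    return float(g.sum()), float((g * (-(s - c) / sigma ** 2)).sum())

s, crossings = -1.0, 0
for step in range(n_steps):
    V, dV = bias_and_grad(s)
    if step % stride == 0:                           # deposit a tempered Gaussian hill
        hills_w.append(w0 * np.exp(-V / dT))
        hills_c.append(s)
    force = -4.0 * s * (s**2 - 1.0) - dV             # -dU/ds - dV_bias/ds
    s_new = s + force * dt + np.sqrt(2.0 * kB_T * dt) * rng.normal()
    crossings += int((s_new > 0.0) != (s > 0.0))
    s = s_new

print(f"{len(hills_c)} hills deposited, {crossings} crossings of s = 0")
```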
In this paper, we have chosen to study the chiral cluster Au15 by employing an isothermal Brownian-type molecular dynamics simulation to discern in greater detail its conformational transition from one enantiomer, say left, to its right counterpart. To facilitate our analysis of the simulation data, we transpose the multidimensional configurational space description to a lower dimensional collective variable (CV) space spanned by two geometry-relevant CVs. The thermally driven progression and mechanism of enantiomeric transitions between the left and right enantiomers will be our main focus, and the strategy is to dissect the time development of the CVs collected from different sets of independent simulation runs. From the simulation data, we found that understanding the dynamics of enantiomeric transitions requires first locating the left and right enantiomers with a molecular modeling and visualization program, then identifying a symmetric structure lying between the left and right enantiomers, and finally defining a reaction coordinate from the latter. We showed in this work that this single reaction coordinate is predictive in unraveling the left ⇌ right enantiomeric transition events, providing a concrete estimate of the transition time span and its distribution, whose reasonableness can be checked further with the autocorrelation function and a vibrational analysis, all of which shed light on the transition mechanisms. Optimization of a New Reactive Force Field for Silver-Based Materials Bruno Madebène Susanna Monti A new reactive force field based on the ReaxFF formalism is effectively parametrized against an extended training set of quantum chemistry data (containing more than 120 different structures) to describe silver and silver-thiolate systems accurately. The results obtained with this new representation demonstrate that the ReaxFF paradigm is a powerful methodology for reproducing the average geometric and energetic properties of metal clusters and slabs more appropriately than the earlier ReaxFF parametrizations dealing with silver and gold. ReaxFF cannot, however, adequately describe specific geometrical features such as the observed shorter distances between the under-coordinated atoms at the cluster edges. Geometric and energetic properties of thiolates adsorbed on a silver Ag20 pyramid are correctly represented by the new ReaxFF and compared with results for gold. The simulation of self-assembled monolayers of thiolates on a silver (111) surface does not indicate the formation of staples, in contrast to the results for gold-thiolate systems. Observation and Analysis of Incoherent Second-Harmonic Generation in Gold Nanoclusters with Six Atoms Renato Barbosa-Silva Manoel L. Silva Neto Dipankar Bain Cid de Araujo We report the unprecedented first hyperpolarizability, β(-2ω; ω,ω), of gold nanoclusters Au6(GSH)2(MPA)2, which have six gold atoms capped by 3-mercaptopropionic acid (MPA) and glutathione (GSH). Here, we used a concentration of 2.1×10¹⁶ nanoclusters/mL to determine β(-2ω; ω,ω) with the hyper-Rayleigh scattering technique, using a 1064 nm laser and analyzing the scattered light at 532 nm. The measured hyperpolarizability is β(-2ω; ω,ω) = 760×10⁻³⁰ esu, which corresponds to ≈127×10⁻³⁰ esu per gold atom. The static hyperpolarizability, β(0) = 52×10⁻³⁰ esu per gold atom, was determined by using a two-level model approximation.
The large β(-2ω; ω,ω) is attributed to the quantum confinement effect and to the geometry of the nanoclusters, which have no inversion symmetry. Preliminary computer calculations based on the DFT method were performed, and the numerical results are of the same order of magnitude as the experimental values. Accuracy of the PM6 and PM7 Methods on Bare and Thiolate-Protected Gold Nanoclusters Joani Mato Emilie B Guidez Semiempirical quantum mechanical (SEQM) methods offer an attractive middle ground between fully ab initio quantum chemistry and force-field simulations, allowing for a quantum mechanical treatment of the system at a relatively low computational cost. However, SEQM methods have not been frequently utilized in the study of transition metal systems, mostly due to the difficulty in obtaining reliable parameters. This paper examines the accuracy of the PM6 and PM7 semiempirical methods in predicting geometries, ionization potentials, and HOMO-LUMO energy gaps of several bare gold clusters (Aun) and thiolate-protected gold nanoclusters (AuSNCs). Contrary to PM6, the PM7 method can predict qualitatively correct geometries and ionization potentials when compared to DFT. PM6 fails to predict the characteristic gold core and gold-sulfur ligand shell (staple motifs) of the AuSNC structures. Both the PM6 and PM7 methods overestimate the HOMO-LUMO gaps. Overall, PM7 provides a more accurate description of bare gold and gold-thiolate nanoclusters than PM6. Nevertheless, refining the gold parameters could help achieve better quantitative accuracy. Controlling the Emission Frequency of Graphene Nanoribbon Emitters Based on Spatially Excited Topological Boundary States Na Liu Graphene nanoribbons (GNRs) with atomically precise heterojunction interfaces are exploited as nanoscale light emitting devices with modulable emission frequencies. By connecting GNRs with different widths and lengths, topological boundary states can be formed and manipulated. Using first-principles-based atomistic simulations, we study the luminescence properties of an STM GNR junction and explore the applications of these topological states as nanoscale light sources. Taking advantage of the ultrahigh resolution of the STM tip, direct injection of high energy carriers at selected boundary states can be achieved. In this way, the emission color can be controlled by precisely changing the tip position. The GNR heterojunction can therefore represent a robust and controllable light-emitting device that takes a step forward towards the fabrication of nanoscale graphene-based optoelectronic devices. Isomeric Thiolate Monolayer Protected Au92 and Au102 Nanomolecules Bokwon Yoon We report the results of a study of the isomeric thiolate monolayer capping of two gold nanocluster molecules, namely Au92 and Au102, both protected by 44 4-tert-butylbenzene thiolate (TBBT) ligands. The finding of an isomeric monolayer of the same ligand in a series of metal nanocluster molecules in this large size range is unprecedented. Au92 and Au102 possess entirely different structures and properties. Au92 has an 84-atom face-centered cubic (FCC) core, whereas Au102 has a 79-atom Marks-decahedral core. Nevertheless, despite the metal core structural diversity and the complexities of the interfacial staples, both clusters have the same number of ligands. The Au92 core is protected by 28 bridging ligands and 8 monomeric staple motifs, whereas Au102 is protected by 19 monomeric and 2 dimeric staple motifs.
The Au92 and Au102 cores have cuboidal and globular structures, respectively. As a result, Au92 has longer {100} facets and exhibits a c(2×2) monolayer arrangement for the bridging ligands, similar to what has been observed on {100} facets of bulk gold, whereas Au102 has only staple motifs. We prepared the Au102 member of the TBBT series using a ligand-exchange-based approach and characterized it by mass spectrometry and UV-Vis spectroscopy. Mass spectrometry revealed that the compound is a mixture of isoelectronic species with the formulas Au102(TBBT)44, Au103(TBBT)45, and Au104(TBBT)46. Concurrent first-principles electronic structure computational studies provide insights into the stability and nature of these two isomeric-monolayer capped gold nanomolecules. Interplay Between Intra and Inter-Band Transitions Associated With Plasmon-Induced Hot Carriers Generation Process in Silver and Gold Nanoclusters Oscar A. Douglas-Gallardo In the last decades, theoretical and experimental studies of nanostructured materials have attracted the efforts of a large part of the scientific community. Light-nanostructure interaction has been a preponderant research topic, fueled by the interest in the plasmonic properties of metallic nanostructures. More recently, the study of plasmon-induced hot carrier generation has drawn the attention of scientists because of their potential application in optoelectronics, photovoltaics and photocatalysis. In this contribution, we study the real-time electronic dynamics associated with the generation of hot carriers in silver and gold nanoparticles, focusing on their energy distribution and atomic shell population/depopulation dynamics. Revisiting our previous results from the perspective of a generalized 2D correlation analysis paves the way to disentangling complex dynamic outcomes, like the dissipation of the sp-band energy absorbed during plasmonic excitation. We show that this mechanism is rooted in the dynamic cross-correlation between the sp-band and d-band electronic populations. Geometry optimization in the Zero Order Regular Approximation for relativistic effects Erik van Lenthe Andreas W. Ehlers Evert Jan Baerends Analytical expressions are derived for the evaluation of energy gradients in the zeroth order regular approximation (ZORA) to the Dirac equation. The electrostatic shift approximation is used to avoid gauge dependence problems. Comparison is made to the quasirelativistic Pauli method, the limitations of which are highlighted. The structures and first metal-carbonyl bond dissociation energies for the transition metal complexes W(CO)6, Os(CO)5, and Pt(CO)4 are calculated, and basis set effects are investigated. © 1999 American Institute of Physics. Hydrogen bonding and stacking interactions of nucleic acid base pairs: A density-functional-theory based treatment Pavel Hobza Efthimios Kaxiras We extend an approximate density functional theory (DFT) method for the description of long-range dispersive interactions, which are normally neglected by construction, irrespective of the correlation functional applied. An empirical formula, consisting of an R⁻⁶ term that is appropriately damped for short distances, is introduced; the corresponding C6 coefficient, which is calculated from experimental atomic polarizabilities, can be consistently added to the total energy expression of the method. We apply this approximate DFT plus dispersion energy method to describe the hydrogen bonding and stacking interactions of nucleic acid base pairs.
Comparison to MP2/6-31G*(0.25) results shows that the method is capable of reproducing hydrogen bonding as well as the vertical and twist dependence of the interaction energy very accurately. © 2001 American Institute of Physics. Efficient Iterative Schemes for Ab Initio Total-Energy Calculations Using a Plane-Wave Basis Set Phys Rev B G. G. Kresse Jürgen Furthmüller We present an efficient scheme for calculating the Kohn-Sham ground state of metallic systems using pseudopotentials and a plane-wave basis set. In the first part the application of Pulay's DIIS method (direct inversion in the iterative subspace) to the iterative diagonalization of large matrices will be discussed. Our approach is stable, reliable, and minimizes the number of order Natoms³ operations. In the second part, we will discuss an efficient mixing scheme also based on Pulay's scheme. A special "metric" and a special "preconditioning" optimized for a plane-wave basis set will be introduced. Scaling of the method will be discussed in detail for non-self-consistent calculations. It will be shown that the number of iterations required to obtain a specific precision is almost independent of the system size. Altogether an order Natoms² scaling is found for systems containing up to 100 electrons. If we take into account that the number of k points can be decreased linearly with the system size, the overall scaling can approach order Natoms. We have implemented these algorithms within a powerful package called VASP (Vienna ab initio simulation package). The program and the techniques have been used successfully for a large number of different systems (liquid and amorphous semiconductors, liquid simple and transition metals, metallic and semiconducting surfaces, phonons in simple metals, transition metals, and semiconductors) and turned out to be very reliable. The Chemistry of the Sulfur-Gold Interface: In Search of a Unified Model ACCOUNTS CHEM RES Evangelina Pensa Emiliano Cortés Gastón Corthey Roberto C Salvarezza Over the last three decades, self-assembled molecular films on solid surfaces have attracted widespread interest as an intellectual and technological challenge to chemists, physicists, materials scientists, and biologists. A variety of technological applications of nanotechnology rely on the possibility of controlling topological, chemical, and functional features at the molecular level. Self-assembled monolayers (SAMs) composed of chemisorbed species represent fundamental building blocks for creating complex structures by a bottom-up approach. These materials take advantage of the flexibility of organic and supramolecular chemistry to generate synthetic surfaces with well-defined chemical and physical properties. These films already serve as structural or functional parts of sensors, biosensors, drug-delivery systems, molecular electronic devices, protective capping for nanostructures, and coatings for corrosion protection and tribological applications. From Ultrasoft Pseudopotentials to the Projector Augmented-Wave Method G. J. Kresse Daniel Joubert The formal relationship between ultrasoft (US) Vanderbilt-type pseudopotentials and Blöchl's projector augmented wave (PAW) method is derived. It is shown that the total energy functional for US pseudopotentials can be obtained by linearization of two terms in a slightly modified PAW total energy functional. The Hamilton operator, the forces, and the stress tensor are derived for this modified PAW functional. A simple way to implement the PAW method in existing plane-wave codes supporting US pseudopotentials is pointed out.
In addition, critical tests are presented to compare the accuracy and efficiency of the PAW and the US pseudopotential methods with relaxed-core all-electron methods. These tests include small molecules (H2, H2O, Li2, N2, F2, BF3, SiF4) and several bulk systems (diamond, Si, V, Li, Ca, CaF2, Fe, Co, Ni). Particular attention is paid to the bulk properties and magnetic energies of Fe, Co, and Ni. Tight-binding approach to time-dependent density-functional response theory Sándor Suhai Fabio Della Sala Th. Frauenheim In this paper we propose an extension of the self-consistent charge-density-functional tight-binding (SCC-DFTB) method [M. Elstner et al., Phys. Rev. B 58, 7260 (1998)], which allows the calculation of the optical properties of finite systems within time-dependent density-functional response theory (TD-DFRT). For a test set of small organic molecules, low-lying singlet excitation energies are computed in good agreement with first-principles and experimental results. The overall computational cost of this parameter-free method is very low and thus it allows us to examine large systems: we report successful applications to C60 and the polyacene series. Clusters of transition-metal atoms Michael D Morse Towards an order- Célia Fonseca Guerra J. G. Snijders G. te Velde Accurate Coulomb Potentials for Periodic and Molecular Systems through Density Fitting J CHEM THEORY COMPUT Mirko Franchini Pierre Herman Theodoor Philipsen Lucas Visscher We present a systematically improvable density fitting scheme designed for accurate Coulomb potential evaluation of periodic and molecular systems. The method does not depend on the way the density is calculated, allowing for a basis set expansion as well as a numerical representation of the orbitals. The scheme is characterized by a partitioning of the density into local contributions that are expanded by means of cubic splines. For three-dimensional periodic systems, the long-range contribution to the Coulomb potential is treated with the usual reciprocal space representation of the multipole moments, while in one- and two-dimensional systems, it is calculated via a new algorithm based on topological extrapolation. The efficiency and numerical robustness of the scheme are assessed for a number of periodic and nonperiodic systems within the framework of density-functional theory. Computational spectroscopy of large systems in solution: the DFTB/PCM and TD-DFTB/PCM approach Vincenzo Barone Ivan Carnimeo Giovanni Scalmani The Density Functional Tight Binding (DFTB) and Time Dependent DFTB (TD-DFTB) methods have been coupled with the Polarizable Continuum Model (PCM) of solvation, aiming to study spectroscopic properties of large systems in condensed phases. The calculation of the ground and excited state energies, together with the analytical gradient and Hessian of the ground state energy, has been implemented in a fully analytical and computationally effective approach. After sketching the theoretical background of both DFTB and PCM, we describe the details of both the formalism and the implementation. We report a number of examples ranging from vibrational to electronic spectroscopy, and we identify the strengths and the limitations of the DFTB/PCM method. We also evaluate DFTB as a component in a hybrid approach, together with a more refined quantum mechanical (QM) method and PCM, for the specific case of anharmonic vibrational spectra.
The Role of Hydrogen on the Adsorption Behavior of Carboxylic Acid on TiO2 Surfaces Wolfgang Heckel Beatrix A. M. Elsner In this work, we present binding energies of acetic acid on the (110), (100), and (011) surfaces of rutile TiO2 calculated with the two density functional theory (DFT) exchange-correlation functionals PBE and PBEsol. It is shown that the binding energies can be influenced, in this case slightly reduced for all three surfaces, via preadsorption of hydrogen. Additionally, we tested the performance of the density-functional based tight-binding (DFTB) method applied to these adsorbate systems. Analysis of the electronic density of states shows that DFTB provides qualitatively comparable results to DFT calculations as long as the Fermi energy level remains within the band gap. Density functional based calculations for Fen (n⩽32) CHEM PHYS Christof Köhler Gotthard Seifert We investigate magnetic and structural properties of iron clusters up to Fe32, extending well into the size range accessible by experiment. A density-functional based tight-binding scheme fully incorporating the effects of spin polarisation and charge transfer in a self-consistent manner has been used. The potential hypersurfaces have been scanned by an unconstrained search using a genetic algorithm. Results for smaller clusters up to Fe17 are validated against more sophisticated density functional theory calculations. Our magnetic moment data show a strong change around Fe13, which is unique in this size range. For the larger cluster sizes a smooth decrease of the clusters' average spin magnetic moments is found, in good agreement with experimental data. ChemInform Abstract: Density Functional Tight Binding: Application to Organic and Biological Molecules Michael Gaus Qiang Cui In this work, we review recent extensions of the density functional tight binding (DFTB) methodology and its application to organic and biological molecules. DFTB denotes a class of computational models derived from density functional theory (DFT) using a Taylor expansion around a reference density. The first- and second-order models, DFTB1 and DFTB2, have been reviewed recently (WIREs Comput Mol Sci 2012, 2:456–465). Here, we discuss the extension to third order, DFTB3, which in combination with a modification of the Coulomb interactions in the second-order formalism and a new parametrization scheme leads to a significant improvement of the overall performance. The performance of DFTB2 and DFTB3 for organic and biological molecules is discussed in detail, as well as problems and limitations of the underlying approximations. WIREs Comput Mol Sci 2014, 4:49–61. doi: 10.1002/wcms.1156 Adsorption of multivalent alkylthiols on Au(111) surface: Insights from DFT J COMPUT CHEM Edoardo Fertitta Elena Voloshina Beate Paulus The adsorption of multivalent thiols on the gold (111) surface was investigated using density functional theory with the Perdew-Burke-Ernzerhof functional. Through the comparison of differences in energetics, structure and charge density distribution of a set of monodentate and polydentate thiols, we have described in detail the factors affecting the adsorption energy and the role played by the multivalence, which causes a decrease of the adsorption energy because of both electronic and steric hindrance effects.
Finally, the comparison between the adsorption of 1,2- and 1,3-disulfides revealed how the chain length may affect the cleavage of the S–S bond when they adsorb on the Au(111) surface. © 2013 Wiley Periodicals, Inc. Generalized Gradient Approximation Made Simple [Phys. Rev. Lett. 77, 3865 (1996)] John P. Perdew Kieron Burke Matthias Ernzerhof Structures of small gold cluster cations (Aun+, n<14): Ion mobility measurements versus density functional calculations Stefan Gilb Patrick J Weis Filip Furche Manfred M. Kappes We have performed ion mobility measurements on gold cluster cations Aun+ generated by pulsed laser vaporization. For clusters with n<14, experimental cross sections are compared with theoretical results from density functional calculations. This comparison allows structural assignment. We find that room temperature gold cluster cations have planar structures for n=3–7. Starting at n=8 they form three-dimensional structures, with (slightly distorted) fragments of the bulk phase structure being observed for n=8–10. Electronic structure analysis of small gold clusters Aum (m ≤ 16) by density functional theory Giuseppe Zanti Daniel Peeters Small gold clusters Aum (m ≤ 16) were analyzed step by step using density functional theory at the B3LYP level with a Lanl2DZ pseudopotential to understand the rules governing the structures obtained for the most stable clusters. After a characterization by means of NBO population analysis and spin densities, the particular electronic structure of such species was confronted with their structural parameters and stability. It appears that the most stable structures can be described in an original way through resonance structures resulting from a decomposition of Aum clusters into dimeric Au2 subunits. These are arranged so as to promote: 1. A good overlap between bonding σ and anti-bonding σ* areas belonging to different Au2 units. 2. A cyclic flow of electrons over the whole cluster. This model uses relatively simple chemical concepts in order to justify most of the structures already found in the literature as well as to establish a new approach explaining the structural transition from two- to three-dimensional configurations. Unraveling the Shape Transformation in Silicon Clusters Koblar Alan Jackson Mihai Horoi Indira Chaudhuri Alexandre Shvartsburg The prolate-to-spherical shape transition in Group IV clusters has been a puzzle since its discovery over a decade ago. Here we explain this phenomenon by elucidating the structures of Sin and Sin+ with n=20–27. The geometries were obtained in unbiased searches using a new "big bang" optimization method. They are substantially more stable than any found to date, and their ion mobilities and dissociation energies are in excellent agreement with experiment. The present results prove that the packing of midsize clusters is thermodynamically controlled and open the door to understanding the evolution of semiconductor nanosystems towards the bulk. Oxidation of Gold Clusters by Thiols Brian Barngrover Christine M. Aikens The formation of gold-thiolate nanoparticles via oxidation of gold clusters by thiols is examined in this work. Using the BP86 density functional with a triple-ζ basis set, the adsorption of methylthiol onto various gold clusters Aun(Z) (n = 1-8, 12, 13, 20; Z = 0, -1, +1) and Au38(4+) is investigated.
The rate-limiting step for the reaction of one thiol with the gold cluster is the dissociation of the thiol proton; the resulting hydrogen atom can move around the gold cluster relatively freely. Addition of a second thiol can lead to H2 formation and the generation of a gold-thiolate staple motif. The Becke Fuzzy Cells Integration Scheme in the Amsterdam Density Functional Program Suite In this article, we document a new implementation of the fuzzy cells scheme for numerical integration in polyatomic systems [Becke, J. Chem. Phys. 1988, 88, 2547] and compare its efficiency and accuracy with respect to an integration scheme based on the Voronoi space partitioning. We show that the accuracy of both methods is comparable, but that the fuzzy cells scheme is better suited for geometry optimization. For this method, we also introduce the locally dense grid concept and present a proof-of-concept application. © 2013 Wiley Periodicals, Inc. Extensions of the Time-Dependent Density Functional Based Tight-Binding Approach Adriel Dominguez Garcia The time-dependent density functional based tight-binding (TD-DFTB) approach is generalized to account for fractional occupations. In addition, an on-site correction leads to marked qualitative and quantitative improvements over the original method. In particular, the known failure of TD-DFTB for the description of σ → π* and n → π* excitations is overcome. Benchmark calculations on a large set of organic molecules also indicate a better description of triplet states. The accuracy of the revised TD-DFTB method is found to be similar to that of first-principles TD-DFT calculations, at a highly reduced computational cost. As a side issue, we also discuss the generalization of the TD-DFTB method to spin-polarized systems. In contrast to an earlier study [Trani et al., JCTC 7 3304 (2011)], we obtain a formalism that is fully consistent with the use of local exchange-correlation functionals in the ground state DFTB method. Self-consistent GW calculations for semiconductors and insulators Maxim Shishkin We present GW calculations for small and large gap systems comprising typical semiconductors (Si, SiC, GaAs, GaN, ZnO, ZnS, CdS, and AlP), small gap semiconductors (PbS, PbSe, and PbTe), insulators (C, BN, MgO, and LiF), and noble gas solids (Ar and Ne). It is shown that the G0W0 approximation always yields too small band gaps. To improve agreement with experiment, the eigenvalues in the Green's function G (GW0) and in the Green's function and the dielectric matrix (GW) are updated until self-consistency is reached. The first approximation leads to excellent agreement with experiment, whereas an update of the eigenvalues in G and W gives too large band gaps for virtually all materials. From a pragmatic point of view, the GW0 approximation thus seems to be an accurate and still reasonably fast method for predicting quasiparticle energies in simple sp-bonded systems. We furthermore observe that the band gaps in materials with shallow d states (GaAs, GaN, and ZnO) are systematically underestimated. We propose that an inaccurate description of the static dielectric properties of these materials is responsible for the underestimation of the band gaps in GW0, which is itself a result of the incomplete cancellation of the Hartree self-energy within the d shell by local or gradient corrected density functionals.
From clusters to bulk: A relativistic density functional investigation on a series of gold clusters Aun, n = 6,…,147 Oliver Häberlen Sai-Cheong Chung Mauro Stener Notker Rösch A series of gold clusters spanning the size range from Au6 through Au147 (with diameters from 0.7 to 1.7 nm) in icosahedral, octahedral, and cuboctahedral structure has been theoretically investigated by means of a scalar relativistic all-electron density functional method. One of the main objectives of this work was to analyze the convergence of cluster properties toward the corresponding bulk metal values and to compare the results obtained for the local density approximation (LDA) to those for a generalized gradient approximation (GGA) to the exchange-correlation functional. The average gold–gold distance in the clusters increases with their nuclearity and correlates essentially linearly with the average coordination number in the clusters. An extrapolation to the bulk coordination of 12 yields a gold–gold distance of 289 pm in LDA, very close to the experimental bulk value of 288 pm, while the extrapolated GGA gold–gold distance is 297 pm. The cluster cohesive energy varies linearly with the inverse of the calculated cluster radius, indicating that the surface-to-volume ratio is the primary determinant of the convergence of this quantity toward bulk. The extrapolated LDA binding energy per atom, 4.7 eV, overestimates the experimental bulk value of 3.8 eV, while the GGA value, 3.2 eV, underestimates the experiment by almost the same amount. The calculated ionization potentials and electron affinities of the clusters may be related to the metallic droplet model, although deviations due to the electronic shell structure are noticeable. The GGA extrapolation to bulk values yields 4.8 and 4.9 eV for the ionization potential and the electron affinity, respectively, remarkably close to the experimental polycrystalline work function of bulk gold, 5.1 eV. Gold 4f core level binding energies were calculated for sites with bulk coordination and for different surface sites. The core level shifts for the surface sites are all positive and distinguish among the corner, edge, and face-centered sites; sites in the first subsurface layer show still small positive shifts. © 1997 American Institute of Physics. American Institute of Physics (AIP). Handbook PHYS TODAY Dwight E. Gray Self-consistent-charge density-functional tight-binding method for simulations of complex materials properties D. Porezag G. Jungnickel We outline details about an extension of the tight-binding (TB) approach to improve total energies, forces, and transferability. The method is based on a second-order expansion of the Kohn-Sham total energy in density-functional theory (DFT) with respect to charge density fluctuations. The zeroth order approach is equivalent to a common standard non-self-consistent TB scheme, while at second order a transparent, parameter-free, and readily calculable expression for generalized Hamiltonian matrix elements may be derived. These are modified by a self-consistent redistribution of Mulliken charges (SCC). Besides the usual "band structure" and short-range repulsive terms the final approximate Kohn-Sham energy additionally includes a Coulomb interaction between charge fluctuations.
At large distances this accounts for long-range electrostatic forces between two point charges and approximately includes self-interaction contributions of a given atom if the charges are located at one and the same atom. We apply the new SCC scheme to problems where deficiencies within the non-SCC standard TB approach become obvious. We thus considerably improve transferability. Simplified LCAO Method for the Periodic Potential Problem J. C. Slater G. F. Koster The LCAO, or Bloch, or tight binding, approximation for solids is discussed as an interpolation method, to be used in connection with more accurate calculations made by the cellular or orthogonalized plane-wave methods. It is proposed that the various integrals be obtained as disposable constants, so that the tight binding method will agree with accurate calculations at symmetry points in the Brillouin zone for which these calculations have been made, and that the LCAO method then be used for making calculations throughout the Brillouin zone. A general discussion of the method is given, including tables of matrix components of energy for simple cubic, face-centered and body-centered cubic, and diamond structures. Applications are given to the results of Fletcher and Wohlfarth on Ni, and Howarth on Cu, as illustrations of the fcc case. In discussing the bcc case, the splitting of the energy bands in chromium by an antiferromagnetic alternating potential is worked out, as well as a distribution of energy states for the case of no antiferromagnetism. For diamond, comparisons are made with the calculations of Herman, using the orthogonalized plane-wave method. The case of such crystals as InSb is discussed, and it is shown that their properties fit in with the energy band picture. Adsorption and dimerisation of thiol molecules on Au(111) using a Z-matrix approach in density functional theory MOL SIMULAT Michael J Ford R. C. Hoft Julian D. Gale The adsorption energetics of methanethiolate and benzenethiolate on Au(111) have been calculated using periodic density functional theory (DFT), based on the SIESTA methodology, with an internal coordinates implementation for geometry input and structure optimisation. Both molecules are covalently bound with interaction energies of 1.85 and 1.43 eV for methanethiolate and benzenethiolate, respectively. The preferred binding site is slightly offset from the bridge site in both cases towards the fcc-hollow. The potential energy surfaces (PES) have depths of 0.36 and 0.22 eV, the hollow sites are local maxima in both cases, and there is no barrier to diffusion of the molecule at the bridge site. The corresponding dimers are weakly bound for methanethiolate and benzenethiolate, with binding energies of 0.38 and 0.16 eV, respectively, and the preferred binding geometry is with the two sulphur atoms close to adjacent atop sites. The barrier to dissociation of the dimer dimethyl disulphide is estimated to lie between 0.3 and 0.35 eV. Coverage and charge dependent adsorption of butanethiol on the Au(1 1 1) surface: A density functional theory study Roger Nadler Rocío Sánchez-de-Armas Javier Fdez A theoretical study based on periodic DFT calculations of the structure, the surface bonding, and the energetics of butanethiols adsorbed on the Au(1 1 1) surface is reported. Several sites and coverages have been considered, and neutral and charged metal surfaces have been simulated.
Whatever the coverage is, the preferred site is a hollow-bridge type site in which the sulfur atom simultaneously binds to two gold atoms. Thiol adsorption parameters are sensitive to the coverage, especially the adsorption energy, which shows a clear response to the number of thiols adsorbed, and the thiol-surface interaction decreases when the coverage grows, as the lateral repulsion between the alkyl tails weakens the strength of the Au-S bond. The thiol-surface interaction parameters are also sensitive to the charge of the metal. Also, we found that the adsorption becomes more favorable when the metal surface is negatively charged, and less favorable on positive surfaces. Finally, in order to analyze dynamical effects, we performed a molecular dynamics (MD) simulation considering a system with ethanethiol on an Au slab pre-covered by water. The MD simulation shows that the proton transfer occurs within a few femtoseconds and that, prior to the transfer itself, the sulfur atom binds to a gold surface atom. This Au atom is clearly pulled out of the surface, which could be interpreted as the onset of the island and cluster formation already observed in the low coverage regime. Decomposition of Methylthiolate Monolayers on Au(111) Prepared from Dimethyl Disulfide in Solution Phase F. P. Cometto V. A. Macagno Patricia Paredes-Olivera E. M. Patrito We investigated the formation and stability of layers of methylthiolate prepared on the Au(111) surface by the method of immersion in an ethanolic solution of dimethyl disulfide (DMDS). The surface species were characterized by electrochemical reductive desorption and high-resolution photoelectron spectroscopy. Both techniques confirmed the formation of a methylthiolate monolayer at short immersion times (around 1 min). As the immersion time increased, the electrochemical experiments showed the disappearance of the methylthiolate reductive desorption current peak and the appearance of a current peak at ca. −0.9 V which was attributed to sulfur species. At long immersion times, the XPS measurements showed two main components for the S 2p signal: a component at ca. 161 eV which corresponds to atomic sulfur and a component at ca. 162 eV which we attributed to polysulfide species. We propose that the breakage of the S−C bond of methylthiolate is responsible for the appearance of sulfur species on the surface. Density functional theory (DFT) calculations were performed to identify the elementary steps that may lead to the decomposition of methylthiolate. We found that the cleavage of the S−C bond is only activated by the oxidative dehydrogenation of the methyl group of methylthiolate. Thio-oxymethylene, SCH2O, is the key intermediate leading to the breakage of the S−C bond because it decomposes into atomic sulfur and formaldehyde with an activation energy barrier of only 1.1 kcal/mol. A QM/MM implementation of the self-consistent charge density functional tight binding (SCC-DFTB) method Martin Karplus A quantum mechanical/molecular mechanical (QM/MM) approach based on an approximate density functional theory, the so-called self-consistent charge density functional tight binding (SCC-DFTB) method, has been implemented in the CHARMM program and tested on a number of systems of biological interest. In the gas phase, SCC-DFTB gives reliable energetics for models of the triosephosphate isomerase (TIM) catalyzed reactions.
The rms errors in the energetics compared to B3LYP/6-31+G(d,p) are about 2−4 kcal/mol; this is to be contrasted with AM1, where the corresponding errors are 9−11 kcal/mol. The method also gives accurate vibrational frequencies. For the TIM reactions in the presence of the enzyme, the overall SCC-DFTB/CHARMM results are in somewhat worse agreement with the B3LYP/6-31+G(d,p)/CHARMM values; the rms error in the energies is 5.4 kcal/mol. Single-point B3LYP/CHARMM energies at the SCC-DFTB/CHARMM optimized structures were found to be very similar to the full B3LYP/CHARMM values. The relative stabilities of the αR and 310 conformations of penta- and octaalanine peptides were studied with minimization and molecular dynamics simulations in vacuum and in solution. Although CHARMM and SCC-DFTB give qualitatively different results in the gas phase (the latter is in approximate agreement with previous B3LYP calculations), similar behavior was found in aqueous solution simulations with CHARMM and SCC-DFTB/CHARMM. The 310 conformation was not found to be stable, and converted to the αR form in about 15 ps. The αR conformation was stable in the simulation with both SCC-DFTB/CHARMM and CHARMM. The i,i+3 CO···HN distances in the αR conformation were shorter with the SCC-DFTB method (2.58 Å) than with CHARMM (3.13 Å). With SCC-DFTB/CHARMM, significant populations with i,i+3 CO···HN distances near 2.25 Å, particularly for the residues at the termini, were found. This can be related to the conclusion from NMR spectroscopy that the 310 configuration contributes in alanine-rich peptides, especially at the termini. Thiols and Disulfides on the Au(111) Surface: The Headgroup−Gold Interaction Henrik Grönbeck Alessandro Curioni Wanda Andreoni How thiols and disulfides bind to gold surfaces to form self-assembled monolayers is a long-standing open question. In particular, determining the nature itself of the anchor groups and of their interaction with the metal is a priority issue, which has so far been approached only with oversimplified models. We present ab initio calculations of the adsorption configurations (dissociative and not) of methanethiol and dimethyl disulfide on Au(111) at low coverage, which are based on density functional theory using gradient-corrected exchange-correlation functionals. A complete characterization of their structure, binding energies, and type of bonding is obtained. It is established that dissociation is clearly favored for the disulfide with subsequent formation of strongly bound thiolates, in agreement with experimental evidence, whereas thiolates resulting from S−H bond cleavage in thiols can coexist with the adsorbed "intact" species and become favored if accompanied by the formation of molecular hydrogen. An Improved Self-Consistent-Charge Density-Functional Tight-Binding (SCC-DFTB) Set of Parameters for Simulation of Bulk and Molecular Systems Involving Titanium Grygoriy Dolgonos Ney Henrique Moreira A new self-consistent-charge density-functional tight-binding (SCC-DFTB) set of parameters for Ti−X pairs of elements (X = Ti, H, C, N, O, S) has been developed. The performance of this set has been tested with respect to TiO2 bulk phases and small molecular systems. It has been found that the band structures, geometric parameters, and cohesive energies of rutile and anatase polymorphs are in good agreement with the reference DFT data and with experiment. Low-index rutile and anatase surfaces were also tested.
For molecular systems, binding and atomization energies close to their DFT analogues have been achieved. Large errors, however, have been found for systems in high-spin states and/or having multireference character of their wave functions. The correct performance of SCC-DFTB for surface reactions has been demonstrated via water splitting on the anatase (001) surface. The current SCC-DFTB set is a suitable tool for future in-depth investigation of chemical processes occurring on the surfaces of TiO2 polymorphs as well as for other processes of physicochemical interest. Quadratic integration over the three-dimensional Brillouin zone G. Wiesenekker A new method is described to evaluate integrals of quadratically interpolated functions over the three-dimensional Brillouin zone. The method is based on the method of the authors for analytic quadratic integration over the two-dimensional Brillouin zone. It uses quadratic interpolation not only for the dispersion relation ε(k), but for property functions f(k) as well. The method allows a 'machine accuracy' evaluation of the integrals and may therefore be regarded as equivalent to a truly analytic evaluation of the integrals. It is compared to other methods of integral approximation by calculating tight-binding Brillouin zone integrals using the same number of k-points for all methods. Also shown are cohesive energy calculations for a number of elements. When the quadratic method is compared to the commonly used linear method, it is found that far fewer k-points are needed to obtain a desired accuracy. Projector Augmented-Wave Method Peter E. Blöchl An approach for electronic structure calculations is described that generalizes both the pseudopotential method and the linear augmented-plane-wave (LAPW) method in a natural way. The method allows high-quality first-principles molecular-dynamics calculations to be performed using the original fictitious Lagrangian approach of Car and Parrinello. Like the LAPW method it can be used to treat first-row and transition-metal elements with affordable effort and provides access to the full wave function. The augmentation procedure is generalized in that partial-wave expansions are not determined by the value and the derivative of the envelope function at some muffin-tin radius, but rather by the overlap with localized projector functions. The pseudopotential approach based on generalized separable pseudopotentials can be regained by a simple approximation. The magic gold cluster Au20 INT J QUANTUM CHEM Eugene S Kryachko F. Remacle The 20-nanogold cluster Au20 exhibits a large variety of two- and three-dimensional isomeric forms. Among them is the ground-state isomer Au20(Td), representing the stable cluster with a unique tetrahedral shape, with all atoms on the surface, and a large HOMO-LUMO gap that even slightly exceeds that of the buckyball fullerene C60. The anionic cluster Au20⁻(Td), which holds its parent tetrahedral symmetry, features a high catalytic activity. The list of the properties of the 20-nanogold clusters surveyed in the present work ranges from the energetic order of stability of its isomers to the optical absorption and excitation spectra of the Au20(Td) cluster. We also report the structures and properties of its doubly charged clusters Au20²⁺ and Au20²⁻ and computationally confirm that the dianion Au20²⁻ is indeed stable. The zero-point-energy-corrected adiabatic second electron affinity of Au20(Td) amounts to 0.43–0.53 eV, which is consistent with the experimental data.
In addition, we provide computational evidence of the existence of the novel, hollow cage isomer of Au20 and analyze its key properties. © 2007 Wiley Periodicals, Inc. Int J Quantum Chem, 2007 Hybrid SCC‐DFTB/molecular mechanical studies of H‐bonded systems and of N‐acetyl‐(L‐Ala)nN′‐methylamide helices in water solution Wen-Ge Han Du K. J. Jalkanen A hybrid quantum mechanical (QM) and molecular mechanical (MM) approach has been developed and used to study the aqueous solvation effect on biological systems. The self-consistent charge density functional tight-binding (SCC-DFTB) method is employed to perform the quantum mechanical calculations in the QM part, while the AMBER 4.1 force field is used to perform the molecular mechanical calculations in the MM part. The coupling terms between these two parts include electrostatic and van der Waals interactions. As a test of feasibility, this approach has first been applied to some small systems H-bonded with water molecule(s), and very good agreement with the ab initio results has been achieved. The hybrid potential was then used to investigate the solvation effect on the capped (L-Ala)n helices with n = 4, 5, 8 and 11. (L-Ala)n was treated with the SCC-DFTB method and the water molecules with the TIP3P water model. It has been shown that, in the gas phase, the α helices of (L-Ala)n are less stable than the corresponding 310 helices. In water solution, however, the α helices are stabilized and, compared with the 310 helices, the α helices have stronger charge–charge interactions with the surrounding water molecules. This may be explained by the larger dipole moment of the α helices in aqueous solution, which will influence and organize the orientations of the surrounding water molecules. © 2000 John Wiley & Sons, Inc. Int J Quant Chem 78: 459–479, 2000 Quantum Sized Gold Nanoclusters with Atomic Precision Huifeng Qian Manzhou Zhu Zhikun Wu Rongchao Jin Gold nanoparticles typically have a metallic core, and the electronic conduction band consists of quasicontinuous energy levels (i.e. spacing δ ≪ kBT, where kBT is the thermal energy at temperature T (typically room temperature) and kB is the Boltzmann constant). Electrons in the conduction band roam throughout the metal core, and light can collectively excite these electrons to give rise to plasmonic responses. This plasmon resonance accounts for the beautiful ruby-red color of colloidal gold first observed by Faraday back in 1857.
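For orientation, a quick order-of-magnitude check of the δ ≪ kBT criterion above, using standard constants rather than numbers from the abstract itself: \[ k_B T \approx 8.617\times10^{-5}\,\mathrm{eV\,K^{-1}} \times 300\,\mathrm{K} \approx 26\,\mathrm{meV}. \] A cluster whose electronic level spacing δ lies well below roughly 26 meV therefore behaves like a metal with a quasicontinuous band, whereas quantum-sized clusters with δ comparable to or larger than this value show discrete, molecule-like electronic behavior.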
SGI 2021 Summer Geometry Initiative Self-similarity loss for shape descriptor learning in correspondence problems By Faria Huq, Kinjal Parikh, Lucas Valença During the 4th week of SGI, we worked closely with Dr. Tal Shnitzer to develop an improved loss function for learning functional maps with robustness to symmetric ambiguity. Our project goal was to modify recent weakly-supervised works that generate deep functional maps to make them handle symmetric correspondences better. Shape correspondence is a task that has various applications in geometry processing, computer graphics, and computer vision – quad mesh transfer, shape interpolation, and object recognition, to name a few. It entails computing a mapping between two objects in a geometric dataset. Several techniques for computing shape correspondence exist – functional maps is one of them. A functional map is a representation that can map between functions on two shapes using their eigenbases or features (or both). Formally, it is the solution to \(\mathrm{arg}\min_{C_{12}}\left\Vert C_{12}F_1-F_2\right\Vert^2\), where \(C_{12}\) is the functional map from shape \(1\) to shape \(2\) and \(F_1\), \(F_2\) are corresponding functions projected onto the eigenbases of the two shapes, respectively. Therefore, there is no direct mapping between vertices in a functional map. This concise representation facilitates manipulation and enables efficient inference. Figure 1: The approach proposed by Sharma and Ovsjanikov may fail on symmetric regions. Notice the hands, which have not been matched correctly due to their symmetric structure. Recently, unsupervised deep learning methods have been developed for learning functional maps. One of the main challenges in such shape correspondence tasks is learning a map that differentiates between shape regions that are similar (due to symmetry). We worked on tackling this challenge. We build upon the state-of-the-art work "Weakly Supervised Deep Functional Map for Shape Matching" by Sharma and Ovsjanikov, which learns shape descriptors from raw 3D data using a PointNet++ architecture. The network's loss function is based on regularization terms that enforce bijectivity, orthogonality, and Laplacian commutativity. This method is weakly supervised because the input shapes must be equally aligned, i.e., share the same 6DOF pose. This weak supervision is required because PointNet-like feature extractors cannot distinguish between left and right unless the shapes share the same pose. To mitigate the same-pose requirement, we explored adding another component to the loss function: the Contextual Loss by Mechrez et al. The Contextual Loss is high when the network learns a large number of similar features; otherwise, it is low. This characteristic promotes the learning of global features and can, therefore, work on non-aligned data. Model architecture overview: As stated above, our basic model architecture is similar to "Weakly Supervised Deep Functional Map for Shape Matching" by Sharma and Ovsjanikov. We use the basic PointNet++ architecture and pass its output through a \(4\)-layer ResNet model. We use the output from ResNet as shape features to compute the functional map. We randomly select \(4000\) vertices and pass them as input to our PointNet++ architecture.
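To make the formulation above a little more concrete, here is a minimal NumPy sketch of how a functional map can be estimated from learned spectral features and how the bijectivity and orthogonality regularizers mentioned above can be evaluated. The feature sizes, the pseudo-inverse solve, and the function names are illustrative assumptions rather than the exact code we used; during training the same operations would run on differentiable tensors.

import numpy as np

def estimate_fmap(F1, F2):
    # F1, F2: (k, d) descriptors projected onto the first k eigenfunctions of each shape.
    # Least-squares solution of C12 @ F1 ~ F2 via the Moore-Penrose pseudo-inverse.
    return F2 @ np.linalg.pinv(F1)

def regularizers(C12, C21):
    # Bijectivity: composing the two directions should give the identity.
    # Orthogonality: each map should have orthonormal columns (near-isometries).
    I = np.eye(C12.shape[0])
    bij = np.linalg.norm(C12 @ C21 - I) ** 2 + np.linalg.norm(C21 @ C12 - I) ** 2
    orth = np.linalg.norm(C12.T @ C12 - I) ** 2 + np.linalg.norm(C21.T @ C21 - I) ** 2
    return bij, orth

# Toy usage with random features: k = 20 basis functions, d = 128 feature channels (assumed sizes).
rng = np.random.default_rng(0)
F1, F2 = rng.standard_normal((20, 128)), rng.standard_normal((20, 128))
C12, C21 = estimate_fmap(F1, F2), estimate_fmap(F2, F1)
print(regularizers(C12, C21))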
Data augmentation: We randomly rotated the shapes of the input dataset around the "up" axis (in our case, the \(y\) coordinate). Our motivation for introducing data augmentation is to make the learning more robust and less dependent on the data orientation. Contextual loss: We explored two ways of adding the contextual loss as a component: Self-similarity: Consider a pair of input shapes (\(S_1\), \(S_2\)) with features \(P_1\) and \(P_2\), respectively. We compute our loss function as follows: \(L_{CX}(S_1, S_2) = -\log(CX(P_1, P_1)) - \log(CX(P_2, P_2))\), where \(CX(x, y)\) is the contextual similarity between every element of \(x\) and every element of \(y\), considering the context of all features in \(y\). More intuitively, the contextual loss is applied on each shape's features with themselves (\(P_1\) with \(P_1\) and \(P_2\) with \(P_2\)), thus giving us a measure of 'self-similarity' in a weakly-supervised way. This measure helps the network learn unique and better descriptors for each shape, thus alleviating errors from symmetric ambiguities. Projected features: We also explored another method for employing the contextual loss. First, we project the basis \(B_1\) of \(S_1\) onto \(S_2\), so that \(B_{12} = B_1 \cdot C_{12}\). Similarly, \(B_{21} = B_2 \cdot C_{21}\). Note that the initial bases \(B_1\) and \(B_2\) are computed directly from the input shapes. Next, we want the projection \(B_{12}\) to get closer to \(B_2\) (the same applies for \(B_{21}\) and \(B_1\)). Hence, our loss function becomes: \(L_{CX}(S_1, S_2) = -\log(CX(B_{21}, B_1)) - \log(CX(B_{12}, B_2))\). Our motivation for applying this loss function is to reduce the symmetry error by encouraging our model to map the eigenbases using \(C_{12}\) and \(C_{21}\) more accurately. Geodesic error: For evaluating our work, we use the metric of average geodesic error between the vertex-pair mappings predicted by our models and the ground-truth vertex-pair indices provided with the dataset. We trained six different models on the FAUST dataset (which contains \(10\) human shapes at the same \(10\) poses each). Our training set includes \(81\) samples, leaving out one full shape (all of its \(10\) poses) and one full pose (so the network never sees any shape doing that pose). These remaining \(19\) inputs are the test set. Additionally, during testing we used ZoomOut by Melzi et al.

Model | Data Augmentation | Contextual Loss
Base | False | None
BA | True | None
SS | False | Self Similarity
SSA | True | Self Similarity
PF | False | Projected Features
PFA | True | Projected Features

Table 1: The settings used for each trained model. Figure 2: The AUC (area under curve) curves and values with the average geodesic error for each model.
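For reference, the average geodesic error reported above can be computed roughly as in the sketch below; the optional normalization by the square root of the target surface area and the variable names are our own illustrative assumptions, not code from the project.

import numpy as np

def mean_geodesic_error(pred_corr, gt_corr, geo_dist, surface_area=1.0):
    # pred_corr, gt_corr: (n,) integer arrays mapping each source vertex
    # to a predicted / ground-truth vertex index on the target shape.
    # geo_dist: (m, m) matrix of pairwise geodesic distances on the target shape.
    errors = geo_dist[pred_corr, gt_corr]         # geodesic distance between prediction and truth
    return errors.mean() / np.sqrt(surface_area)  # optional normalization by sqrt of target area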
None of this would be possible without her. Thank you for taking the time to read this post. If there are any questions or suggestions, we're eager to hear them! 3D Shape Correspondence via Probabilistic Synchronization of Functional Maps and Riemannian Geometry By SGI Fellows Faria Huq, Sahra Yusuf and Berna Kabadayi Over the first 2 weeks of SGI (July 26 – August 6, 2021), we worked on the "Probabilistic Correspondence Synchronization using Functional Maps" project under the supervision of Nina Miolane and Tolga Birdal, with TA Dena Bazazian. In this blog, we motivate and formulate our research questions and present our first results. Finding a meaningful correspondence between two or more shapes is one of the most fundamental shape analysis tasks. Van Kaick et al (2010) stated the problem as follows: given input shapes \(S_1, S_2,…, S_N\), find a meaningful relation (or mapping) between their elements. Shape correspondence is a crucial building block of many computer vision and biomedical imaging algorithms, ranging from texture transfer in computer graphics to segmentation of anatomical structures in computational medicine. In the literature, shape correspondence has also been referred to as shape matching, shape registration, and shape alignment. Figure 1: Some application examples where shape correspondence is used. Courtesy: van Kaick, 2010. In this project, we consider a dataset of 3D shapes and represent the correspondence between any two shapes using the concept of "functional maps" [see section 1]. We then show how the technique of "synchronization" [see section 2] allows us to improve our computations of shape correspondences, i.e., allows us to refine some initial estimates of functional maps. During this project, we implement a method of synchronization using tools from geometric statistics and optimization on a Riemannian manifold describing functional maps, the Stiefel manifold, using the packages geomstats and pymanopt. 1. Representing Shape Correspondences with Functional Maps Shape correspondence is a well-studied problem that applies to both rigid and non-rigid matching. In the rigid scenario, the transformation between two given 3D shapes can be described by a 3D rotation and a 3D translation. If there is a non-rigid transformation between the two 3D shapes, we can use point-to-point correspondences to model the shape matching. However, as mentioned in Ovsjanikov et al (2012), many practical situations make it either impossible or unnecessary to establish point-to-point correspondences between a pair of shapes, because of inherent shape ambiguities or because the user may only be interested in approximate alignment. To tackle that problem, functional maps were introduced in Ovsjanikov et al (2012) and can be considered a generalization of point-to-point correspondences. By definition, a functional map between two shapes is a map between functions defined on these shapes. A function defined on a shape assigns a value to each point of the shape and can be represented as a heatmap on the shape; see Figure 2 for functions defined on a cat and a tiger, respectively. Then, the functional map of Figure 2 maps the function defined on the first shape (the cat) to a function on the second shape (the tiger).
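As a bit of background not spelled out above, the link between a point-to-point map and a functional map can be written explicitly. If \(T : S_1 \to S_2\) is a point-to-point map encoded by a binary matrix \(\Pi\) with \(\Pi_{ij} = 1\) when vertex \(i\) of \(S_1\) corresponds to vertex \(j\) of \(S_2\), and the columns of \(\Phi_1, \Phi_2\) hold the first \(k\) Laplace-Beltrami eigenfunctions of the two shapes, then \[ C = \Phi_1^{+}\, \Pi\, \Phi_2 \] maps the basis coefficients of a function \(g\) on \(S_2\) to the coefficients of its pullback \(g \circ T\) on \(S_1\), where \(\Phi_1^{+}\) denotes the Moore-Penrose pseudo-inverse. Whether such a matrix is labelled \(C_{12}\) or \(C_{21}\) depends on the convention in use, but in every case the functional map is a small \(k \times k\) matrix rather than a dense vertex-to-vertex assignment.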
All in all, instead of computing correspondences between points on the shapes, functional maps compute mappings between functions defined on the shapes. In practice, the functional map is represented by a \(p \times q\) matrix, called C. Specifically, we consider p basis functions defined on the source shape (e.g. the cat) and q basis functions defined on the target shape (e.g. the tiger). Typically, we can consider the eigenfunctions of the Laplace-Beltrami operator. The functional map C then represents how each of the p basis functions of the source shape is mapped onto the set of q basis functions of the target shape. Figure 2: Functional map mapping a function defined on a cat ("source shape") to a function defined on a tiger ("target shape"). The functional map is represented by a matrix that describes how each of the 20 basis functions chosen on the source shape is mapped onto the set of 20 basis functions chosen on the target shape. (generated using PyFM) 2. Problem Formulation: Synchronization Improves Shape Correspondences In this section, we describe the technique of "synchronization" that allows us to improve the computations of functional maps between shapes, and we introduce the related notations. We use subsets of the TOSCA dataset (http://tosca.cs.technion.ac.il/book/resources_data.html), which contains a dataset of 3D shapes of cats, to illustrate the concepts. Intuitively, the method of synchronization builds a graph that links pairs of 3D shapes within the input dataset by edges representing their functional maps; see Figure 3 for an example of such a graph with 4 shapes and 6 edges. The method of synchronization then leverages the property of cycle consistency within the graph. Cycle consistency is a concept that enforces a global agreement among the shape matchings in the graph, by distributing and minimizing the errors to the entire graph — so that not only pairwise relationships are modeled but also global ones. This is why, intuitively, synchronization allows us to refine the functional maps between pairs of shapes by extracting global information over the entire shape dataset. Let's now introduce notations to give the mathematical formulation of the synchronization method. We begin with a set of pairwise functional maps, \(C_{ij} \in \{\mathcal{F}(M_i,M_j)\}_{i,j}\) for \((i,j) \in \varepsilon\). M denotes the set of \(n\) input 3D shapes and \(\varepsilon\) denotes the set of the edges of the directed graph of \(n\) nodes representing the shapes: \(\varepsilon \subset \{1,\dots, n\} \times \{1,\dots, n\}\). We are then interested in finding the underlying "absolute functional maps" \(C_i\) for \(i\in\{1,\dots,n\}\) with respect to a common origin (e.g. \(C_0=I\), the identity matrix) that would respect the consistency of the underlying graph structure. We illustrate these notations in Figure 3. Given a graph of four cat shapes (cat8, cat3, cat9, cat4) and their relative functional maps (C83, C94, C34, C89, C48, C39), we want to find the absolute functional maps (C8, C3, C4, C9). Figure 3: Synchronization between four shapes from the TOSCA dataset. We are given the functional maps Cij between pairs of shapes (i, j) in the graph. We wish to find the absolute functional map of these shapes: the Ci for each shape i in the graph. 3. Riemannian Optimization Algorithm In this section, we explain how we frame the synchronization problem as an optimization problem on a Riemannian manifold. 
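As a concrete aside before turning to the constraints, here is a small NumPy sketch of the standard way a functional map matrix C can be built from a known point-to-point correspondence and truncated Laplace-Beltrami bases. This is our own illustration rather than code from the project; the function and variable names are hypothetical, and conventions for the direction in which C maps differ between papers and libraries.

import numpy as np

def functional_map_from_p2p(p2p, evecs_source, evecs_target):
    # p2p[i] = index of the source vertex corresponding to target vertex i
    # evecs_source: (n_source, p) basis functions on the source shape
    # evecs_target: (n_target, q) basis functions on the target shape
    # Express the pulled-back source basis in the target basis, in the least-squares
    # sense, giving the (q, p) matrix C such that evecs_target @ C ~= evecs_source[p2p].
    C, *_ = np.linalg.lstsq(evecs_target, evecs_source[p2p], rcond=None)
    return C

For near-isometric shapes and Laplace-Beltrami eigenbases, the resulting C is close to orthogonal, which is exactly the property exploited in the constraints below.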
Constraints: As shown by Ovsjanikov et al (2012), functional maps are orthogonal matrices for near-isometries: the columns forming the functional map matrix C are orthonormal vectors. Hence, functional maps naturally reside on the Stiefel manifold, which is the manifold of matrices with orthonormal columns. The Stiefel manifold can be equipped with a Riemannian geometric structure that is implemented in the packages geomstats and pymanopt. We design a Riemannian optimization algorithm that iterates over the Stiefel manifold (or a power manifold of Stiefels) using elementary operations of Riemannian geometry, and converges to our estimates of the absolute functional maps. Later, we compare this method to the corresponding non-Riemannian optimization algorithm that does not constrain the iterates to belong to the Stiefel manifold.

Cost function: We describe here the cost function associated with the Riemannian optimization. We want our absolute functional maps to be as consistent as possible with the relative maps. That is, in the ideal case, \(C_i \cdot C_{ij} = C_j\). To abide by this, we consider the following cost function in our algorithm:

\(\mathop{\mathrm{argmin}}_{C_i \in \mathcal{F}} \sum\limits_{(i,j)\in \varepsilon} \| C_i \cdot C_{ij} - C_j \|_F^2\)

Here we measure the discrepancy in the Frobenius norm, which is equal to the square root of the sum of the squares of the matrix entries. This cost is implemented in Python as follows:

import numpy as np

# Cis are the maps we compute, input_Cijs are the input functional maps
# EDGES is the list of (i, j) pairs forming the edges of the graph
def cost_function(Cis, input_Cijs):
    cost = 0.
    for edge, Cij in zip(EDGES, input_Cijs):
        i, j = edge
        Ci_Cij = np.matmul(Cis[i], Cij)
        cost += np.linalg.norm(Ci_Cij - Cis[j], "fro") ** 2
    return cost

Optimizer: Our goal is to minimize this cost with respect to the Cis, while constraining the Cis to remain on the Stiefel manifold. For solving our optimization problem, we use the trust-region solver from the Pymanopt library.

4. Experiments and Results

We present some preliminary results on the TOSCA dataset.

Dataset: For our baseline experiments, we use the 11 cat shapes from the TOSCA dataset. The cats have a wide variety of poses and deformations, which makes them particularly suitable for our baseline testing.

Initial Functional Map Generation: For each pair of cats within the TOSCA dataset, we compute initial functional maps using state-of-the-art methods, by adapting the source code from here. Our functional maps are of size \(20 \times 20\), i.e., they are written in terms of 20 basis functions on the source shape and 20 basis functions on the target shape.

Graph Generation: We implement a random graph generator using networkX that can generate cycle-consistent graphs. We use this graph generator to generate a graph, with nodes corresponding to shapes from a subset of the TOSCA dataset, and with edges depicting the correspondence relationship between the selected shapes. In our future experiments, this graph generator will allow us to compare the performance of our method depending on the number of shapes (or nodes) and the number of known initial correspondences (or initial functional maps) between the shapes (or sparsity of the graph).

Perturbation: We showcase the efficiency of our algorithm by inducing a synthetic perturbation on the initial functional maps and showing how synchronization allows us to correct it.
Specifically, we add a Gaussian perturbation to the initial functional maps, with standard deviation s = 0.1, and project the resulting corrupted matrices onto the Stiefel manifold to get "corrupted functional maps". The corrupted functional maps are the inputs of our optimization algorithm.

Results: We demonstrate the result of our optimization algorithm for a subset of the TOSCA dataset and initial (corrupted) functional maps generated with the methods described above: we consider the graph with 4 shapes and 6 edges that was presented in Figure 3. The optimization algorithm converged in less than 20 iterations and less than 1 second. Figure 4 shows the output of our optimization algorithm on this graph. Specifically, we consider a function defined on one of the cats and map it to the other cats using our functional maps. We can see that the different body parts are meaningfully mapped to each other across these cat shapes.

Figure 4. The absolute functional map output is visualized as a point-to-point mapping. The regions marked with the same colors on the four shapes are mapped to each other (noise standard deviation = 0.1).

Comparison with initially corrupted functional maps: We show how synchronization allows us to improve over our initially corrupted functional maps. Figure 5 shows a comparison between the corrupted functional maps, the ground-truth (un-corrupted) functional maps, and the output functional maps computed via the output absolute maps given by our algorithm, for the correspondence between cat 4 and cat 8. While the algorithm does not fully recover the ground truth, our analysis shows that it has refined the initially corrupted functional maps. For example, we observe that the noise has been reduced for the off-diagonal elements. This visual inspection is confirmed quantitatively: the distance between the initially corrupted functional maps and the ground truth is 2.033, while the distance between the output functional maps and the ground truth is 1.853, as observed in Table 1. However, the results on the other functional maps C83, C39, C94, C89, C34 are less convincing; see the errors reported in Table 1. This can be explained by the fact that we have chosen a relatively small graph, with only 4 nodes, i.e., 4 shapes. As a consequence, the synchronization method leveraging cycle consistency can be less efficient. Future experiments will investigate the performance of our approach for different graphs with different numbers of nodes and different numbers of edges (sparsity of the graph).

Func. Maps | Error C83 | Error C39 | Error C48 | Error C94 | Error C89 | Error C34
Before sync. | 1.762 | 1.658 | 2.033 | 1.877 | 1.819 | 1.854
After sync. | 1.659 | 2.519 | 1.741 | 2.323 | 2.118 | 2.492
After sync. (Riem.) | 2.576 | 2.621 | 1.853 | 1.984 | 2.181 | 2.519

Table 1. Frobenius norms of the difference between the different functional maps and the ground-truth functional maps.

Discussion: We compare this method with the method that does not restrict the functional maps to lie on the Riemannian Stiefel manifold. We observe similar performances, with a preference for the non-Riemannian optimization method. Future work will investigate which data regimes require optimization with Riemannian constraints and which regimes do not necessarily require it.

Figure 5. Result for the functional map relating cat 4 to cat 8. Left: corrupted functional maps that are the inputs of our algorithm. Middle: ground-truth functional maps, i.e., functional maps before corruption with synthetic noise.
Right: functional maps given by the output of our algorithm.

In this blog, we explained the first steps of our work on shape correspondences. Currently, we are working on a Markov Chain Monte Carlo (MCMC) implementation to get better results with associated uncertainty quantification. We are also working on a custom gradient function to get better performance. In the coming weeks, we look forward to experimenting with other benchmark datasets and comparing our results with baselines.

Acknowledgment: We thank our mentors, Dr. Nina Miolane and Dr. Tolga Birdal, for their consistent guidance and mentorship. They have relentlessly supported us from the beginning and helped us debug our errors. We especially thank Nina for guiding us to write this blog post. We are very grateful for her keen enthusiasm 😇. We also thank our TA, Dr. Dena Bazazian, for her important feedback.

Source code: See notebook here

Tags Demo, Functional Map, Riemannian Geometry, SGI Project

Minimal Surfaces, But With Saddle Points

By Natasha Diederen, Alice Mehalek, Zhecheng Wang, and Olga Guțan

This week we worked on extending the results described here. We learned an array of new techniques and enhanced existing skills that we had developed the week(s) before. Here is some of the work we have accomplished since the last blog post.

One of the improvements we made was to create a tiling function which creates an \(n^3\) grid of our triply periodic surfaces, so that we were better able to visualise them as a structure. We started off with a surface inside a \([-1, 1]^3\) cube, and then imagined an \(n^3\) grid (pictured below). To make a duplicate of the surface in one of these grid cubes, we considered how much the surface would need to be translated in the \(x\), \(y\) and \(z\) directions. For example, to duplicate the original surface in the black cube into the green square, we would need to shift all the \(x\)-coordinates in the mesh by two (since the cube has side length two in all directions) and leave the \(y\)- and \(z\)-coordinates unchanged. Similarly, to duplicate the original surface into the purple square, we would need to shift all the \(x\)-coordinates in the mesh by two, all the \(y\)-coordinates by two, and all the \(z\)-coordinates by \(2n\).

Figure 1. A visualization of the surface tiling.

To copy the surface \(n^3\) times into the right grid cubes, we need to find all the unique triples of offsets chosen from \((0, 2, 4, \dots, 2n)\) and add them to the vertex matrix of the mesh. To update the face data, we add multiples of the number of vertices each time we duplicate into a new cube. With this function in place, we can now see our triply periodic surfaces as a larger structure.

Figure 2. A 3x3x3 Tiling of the Surface.

A skill we continued developing, and something we have grown to enjoy, is what we affectionately call "blendering." To speak in technical terms, we use the open-source software Blender to generate triangle meshes that we then use as tests for our code. For context: Blender is a free and open-source 3D computer graphics software tool set used for a plethora of purposes, such as creating animated films, 3D printed models, motion graphics, virtual reality, and computer games. It includes many features and it truly has endless possibilities.
We used only a small part of it, mesh creation and mesh editing, but we look forward to perhaps exploring more of its possibilities in the future. We seek to create shapes that are non-manifold; mathematically, this means that there exist local regions in our surface that are not homeomorphic to a subset of Euclidean space. In other words, non-manifold shapes contain junctions where more than two faces meet at an edge, or where more than two faces meet at a vertex without sharing an edge.

Figure 3. Manifold versus nonmanifold edges and vertices. Source.

This is intellectually intriguing to consider, because most standard geometry processing methods and techniques do not consider this class of shapes. As such, most algorithms and ideas need to be redesigned for non-manifold surfaces.

Our Blender work consisted of a lot of trial-and-error. None of us had used Blender before, so the learning curve was steep. Yet, despite the occasional frustration, we persevered. With each hour worked, we increased our understanding and expertise, and in the end we were able to generate surfaces we were quite proud of. Most importantly, these triangle meshes have been valuable input for our algorithm and have helped us explore in more detail the differences between manifold and non-manifold surfaces.

Figure 4. Manifold and Nonmanifold Periodic Surfaces.

The new fellows joining this week came from a previous project on minimal surfaces led by Stephanie Wang, which used Houdini as a basis for generating minimal surfaces. Thus, we decided we could use Houdini to carry out some physical deformations, to see how non-manifold surfaces performed compared to similar manifold surfaces. We used a standard Houdini vellum solver with some modifications to simulate a section of our surface falling under gravity. Below are some of the simulations we created.

Figure 5. A Nonmanifold and a Manifold Surface Experiencing Gravity.

Newton's Method

When we were running Pinkall and Polthier's algorithm on our surfaces, we noticed that the algorithm would not stop at a local saddle point such as the Schwarz P surface, but would instead run until there was only a thin strip of area left, which is indeed a minimum, but not a very useful one. Therefore, we switched to Newton's method to solve our optimization problem. We define the triangle surface area as an energy: let the three vertices of a triangle be \(\mathbf{q}_1\), \(\mathbf{q}_2\), \(\mathbf{q}_3\). Then \(E = \frac{1}{2} \|\mathbf{n}\|\), where \(\mathbf{n} = (\mathbf{q}_2-\mathbf{q}_1) \times (\mathbf{q}_3-\mathbf{q}_1)\) is the (unnormalized) face normal, whose length equals twice the triangle area. We then derive its gradient (Jacobian) and Hessian, and assemble them for all faces in the mesh. However, this optimization scheme still did not converge to the desired minimum, perhaps because our initialization was far from the solution. Additionally, one of our project mentors implemented the approach in C++ and similarly saw no convergence. Later, we tried to add line search to Newton's method, but again had no luck. Although our algorithm still does not converge to some minimal surfaces which we know to exist, it has generated the following fun bugs.

In the previous blog post, we discussed studying the physical properties of nonmanifold TPMS. Over the past week, we used the Vellum Solver in Houdini and explored some of the differences in physical properties between manifold and nonmanifold surfaces. However, this is just the beginning; we can continue to expand our work in that direction.
Additional goals may include writing a script to generate many possible initial conditions, then converting the initial conditions into minimal surfaces, either by using the Pinkall and Polthier algorithm, or by implementing another minimal-surface-generating algorithm. More work can be done on enumerating all of the possible nonmanifold structures that the Pinkall and Polthier algorithm generates. The researchers can then categorize the structures based on their geometric or physical properties. As mentioned last week, this is still an open problem.

We would like to thank our mentors Etienne Vouga, Nicholas Sharp, and Josh Vekhter for their patient guidance and the numerous hours they spent helping us debug our Matlab code, even when the answers were not immediately obvious to any of us. A special thanks goes to Stephanie Wang, who shared her Houdini expertise with us and thus made a lot of our visualizations possible. We would also like to thank our Teaching Assistant Erik Amezquita.

Robust computation of the Hausdorff distance between triangle meshes

Authors: Bryce Van Ross, Talant Talipov, Deniz Ozbay

The SGI project titled Robust computation of the Hausdorff distance between triangle meshes was originally planned for a 2-week duration and, due to common interest in continuing, was extended to 3 weeks. This research was led by mentor Dr. Leonardo Sacht of the Department of Mathematics of the Federal University of Santa Catarina (UFSC), Brazil. Accompanying support came from TA Erik Amezquita, and the project team consisted of SGI fellows, math (under)graduate students Deniz Ozbay, Talant Talipov, and Bryce Van Ross. Here is an introduction to the idea of our project. The following is a summary of our research and our experiences regarding computation of the Hausdorff distance.

Given two triangle meshes A, B in R³, the following are defined:

1-sided Hausdorff distance h:

$$h(A, B) = \max\{d(x, B) : x\in A\} = \max\{\min\{\|x-y\| : y\in B\} : x\in A\}$$

where \(d\) is the Euclidean distance and \(\|x-y\|\) is the Euclidean norm. Note that h, in general, is not symmetric. In this sense, h differs from our intuitive sense of distance, which is bidirectional. So, h(B, A) can potentially be a smaller (or larger) distance in contrast to h(A, B). This motivates the need for an absolute Hausdorff distance:

$$H(A,B) = \max\{h(A, B), h(B, A)\}$$

By definition, H is symmetric. Again, by definition, H depends on h. Thus, to yield accurate distance values for H, we must be confident in our implementation and precision of computing h. For a more mathematical explanation of the Hausdorff distance, please refer to this Wikipedia documentation and this YouTube video.

Objects are geometrically complex, and so too can their measurements be. There are different ways to compare meshes to each other via a range of geometry processing techniques and geometric properties. Distance is a common metric of mesh comparison, but the conventional notion of distance is at times limited in scope. See Figures 1 and 2 below.

Figure 1: This distance is limited to the red vertices, ignoring other points of the triangles.

Figure 2: This distance ignores the spatial positions of the triangles. So, the distance is skewed to the points of the triangles, and not the contribution of the space between the triangles.

Figures from the Hausdorff distance between convex polygons.
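As a concrete, if naive, illustration of the definitions above, the following NumPy sketch computes h and H by brute force on point samples of two meshes. This toy example is ours, not part of the project's code; it scales quadratically with the number of samples and only approximates the true mesh-to-mesh distance, which is part of what motivates the more careful branch-and-bound approach described next.

import numpy as np

def one_sided_hausdorff(A_pts, B_pts):
    # h(A, B) on point samples: for every sample of A, take the distance to the
    # nearest sample of B, then take the maximum of those values
    dists = np.linalg.norm(A_pts[:, None, :] - B_pts[None, :, :], axis=-1)
    return dists.min(axis=1).max()

def hausdorff(A_pts, B_pts):
    # symmetric H(A, B) = max(h(A, B), h(B, A))
    return max(one_sided_hausdorff(A_pts, B_pts),
               one_sided_hausdorff(B_pts, A_pts))

# tiny example: the second "mesh" is the first translated by 0.5 along z
A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
B = A + np.array([0.0, 0.0, 0.5])
print(hausdorff(A, B))  # prints 0.5 for this pure translation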
Our research focuses on robustly (efficiently, for all possible cases) computing the Hausdorff distance h for pairs of triangle meshes of objects. The Hausdorff distance h is fundamentally the maximum among a set of minimum distances between the two meshes: for every point of the first mesh we take its minimum distance to the second mesh, and h is the largest of these values.

Why is h significant? If h tends to 0, then this implies that our meshes, and the objects they represent, are very similar. Strong similarity indicates minimal change(s) from one mesh to the other, with possible dynamics being a slight deformation, rotation, translation, compression, stretch, etc. However, if h >> 0, then this implies that our meshes, and the objects they represent, are dissimilar. Weak similarity indicates significant change(s) from one mesh to the other, associated with the earlier dynamics. Intuitively, h depends on the strength of the ideal correspondence from triangle to triangle between the meshes. In summary, h serves as a means of calculating the similarity between triangle meshes by maximally separating the meshes according to the collection of all minimum pointwise-mesh distances.

The Hausdorff distance can be used for a variety of geometry processing purposes. Primary domains include computer vision, computer graphics, digital fabrication, 3D-printing, and modeling, among others. Consider computer vision, an area vital to image processing with a multitude of technological applications in our daily lives. It is often desirable to identify a best-candidate target relative to some initial mesh template. In reference to the set of points within the template, the Hausdorff distance can be computed for each potential target. The target with the minimum Hausdorff distance would qualify as the best fit, ideally being a close approximation to the template object. The Hausdorff distance plays a similar role relative to other areas of computer science, engineering, animation, etc. See Figures 3 and 4, below, for a general representation of h.

Figure 3: Hausdorff distance h corresponds to the dotted-line distance of the left image. In the right image, h is found in the black shaded region of the green triangle via the Branch and Bound Method. This image is found in Figure 1 of the initial reading provided by Dr. Leonardo Sacht.

Figure 4: Hausdorff distance h corresponds to the solid-line distance of the left image. This distance is from the furthest "leftmost" point of the first mesh (armadillo) to the closest "leftmost" point of the second mesh. This image is found in Figure 5 of the initial reading provided by Dr. Leonardo Sacht.

Branch and Bound Method

Our goal was to implement the branch-and-bound method for calculating H. The main idea is to calculate an individual upper bound of the Hausdorff distance for each triangle of mesh A and a common lower bound for the whole mesh A. If the upper bound of some triangle is less than the general lower bound, then this face is discarded and the remaining ones are subdivided. So, we have these 3 main steps:

1. Calculating the lower bound

We define the lower bound as the minimum of the distances from all the vertices of mesh A to mesh B. Firstly, we choose a vertex P on mesh A. Secondly, we compute the distances from point P to all the faces of mesh B. The actual distance from point P to mesh B is the minimum of the distances that were calculated in the previous step.
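To make this point-to-mesh distance concrete, here is a Python/NumPy sketch (our own illustration, not the team's MATLAB code). The point-to-triangle routine follows the case analysis described in the next section: if the projection of P onto the triangle's plane falls inside the triangle, the distance is the distance to that projection; otherwise it is the distance to the closest edge. The bounding-sphere speedup discussed below could be added as an early-out before each point-to-triangle call.

import numpy as np

def point_segment_distance(p, a, b):
    # distance from point p to the segment [a, b]
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def point_triangle_distance(p, a, b, c):
    # assumes a non-degenerate triangle (a, b, c)
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    q = p - np.dot(p - a, n) * n              # projection of p onto the triangle's plane
    # barycentric coordinates of q with respect to (a, b, c)
    v0, v1, v2 = b - a, c - a, q - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    if v >= 0 and w >= 0 and v + w <= 1:      # projection lies inside the triangle
        return abs(np.dot(p - a, n))
    return min(point_segment_distance(p, a, b),
               point_segment_distance(p, b, c),
               point_segment_distance(p, c, a))

def point_to_mesh_distance(p, VB, FB):
    # d(P, B): the minimum over all faces of mesh B of the point-to-triangle distance
    return min(point_triangle_distance(p, VB[f[0]], VB[f[1]], VB[f[2]]) for f in FB)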
For more theoretical details you should check this blog post: http://summergeometry.org/sgi2021/finding-the-lower-bounds-of-the-hausdorff-distance/

The implementation of this part: calculating the distance from the point P to each triangle T of mesh B is a fairly involved process, so I will only describe it rather than show the code. The main feature that has to be considered during this computation is the position of point P relative to the triangle T. For example, if the projection of point P onto the triangle's plane lies inside the triangle, then the distance from point P to the triangle is just the distance from P to that projection. In the other cases, it could be the distance to the closest edge or vertex of triangle T.

While testing this code, our team ran into a problem: computing the point-triangle distance took too much time. So we came up with a bounding-sphere idea. Instead of computing the point-triangle distance, we decided to first compute a point-sphere distance, which is very simple. But which sphere should we pick? The first idea that sprang to our minds was the sphere generated by the circumscribed circle of the triangle T, but computing its center is also complicated. That is why we chose the centroid M of the triangle as the center of the sphere, with radius equal to the maximal distance from M to the vertices of triangle T. So, if the distance from point P to this sphere is already greater than the current minimum, then the distance to the corresponding triangle is certainly not the minimum, and the triangle can be skipped. This trick made our code work approximately 30% faster.

2. Calculating the upper bounds

Overall, the upper bound is derived from the distances between vertices and the triangle inequality. For more theoretical details you should check this blog post: http://summergeometry.org/sgi2021/upper-bound-for-the-hausdorff-distance/

While testing the code from this page on big meshes, our team ran into a memory problem: because we tried to store all the lengths we had already computed, it took too much memory. That is why we decided to simply recompute these lengths when needed, even though it takes a little more time (which is not crucial).

3. Discarding and subdividing

The faces are subdivided in the following way: we add the edge midpoints and the triangles that are generated by the previous vertices and these new points. In the end, we have 4 new faces instead of the old one. For more theoretical details you should check this blog post: http://summergeometry.org/sgi2021/branch-and-bound-method-for-calculating-hausdorff-distance/

Below are our results for two simple meshes, the first one being a sphere mesh and the second one being the simple mesh found in the blog post linked under the "Discarding and subdividing" section.

Figure 5: Results for a sphere mesh with different tolerance levels

Figure 6: Results for a smaller, simple mesh with different tolerance levels

The intuition behind how to determine the Hausdorff distance is relatively simple; however, the implementation of computing this distance isn't trivial. Among the 3 tasks of this Hausdorff distance algorithm (finding the lower bound, finding the upper bound, and finding a triangular subdivision routine), the latter two tasks were dependent on finding the lower bound. We at first thought that the subdivision routine would be the most complicated process, and the lower bound would be the easiest. We were very wrong: the lower bound was actually the most challenging aspect of our code.
Finding vertex-vertex distances was the easiest aspect of this computation. Given that in MATLAB triangular meshes are represented as faces of vertex points, it is difficult to identify specific non-vertex points of some triangle. To account for this, we at first used computations dependent on finding a series of normals amongst points. This yielded a functional, albeit slow, algorithm. Upon comparison, the lower bounds computation was the cause of this latency and needed to be improved. At this point, it was a matter of finding a different implementation. It was fun brainstorming with each other about possible ways to do this. It was more fun to consider counterexamples to our initial ideas, and to proceed from there. At a certain point, we incorporated geometric ideas (using the centroid of triangles) and topological ideas (using the closed balls) to find a subset of relevant faces relative to some vertex of the first mesh, instead of having to consider all possible faces of the secondary mesh. Bryce's part was having to mathematically prove his argument, for it to be correct, but only to find out later it would be not feasible to implement (oh well). Currently, we are trying to resolve bugs, but throughout the entire process we learned a lot, had fun working with each other, and gained a stronger understanding of techniques used within geometry processing. In conclusion, it was really fascinating to see the connection between the theoretical ideas and the implementation of an algorithm, especially how a theoretically simple algorithm can be very difficult to implement. We were able to learn more about the topological and geometric ideas behind the problem as well as the coding part of the project. We look forward to finding more robust ways to code our algorithm, and learning more about the mathematics behind this seemingly simple measure of the geometric difference between two meshes. Tags Hausdorff distance Volume-encoded parameterization Post author By Alice Mehalek No Comments on Volume-encoded parameterization by Alice Mehalek, Marcus Vidaurri, and Zhecheng Wang UV mapping or UV parameterization is the process of mapping a 3D surface to a 2D plane. A UV map assigns every point on the surface to a point on the plane, so that a 2D texture can be applied to a 3D object. A simple example is the representation of the Earth as a 2D map. Because there is no perfect way to project the surface of a globe onto a plane, many different map projections have been made, each with different types and degrees of distortion. Some of these maps use "cuts" or discontinuities to fragment the Earth's surface. In computer graphics, the typical method of UV mapping is by representing a 3D object as a triangle mesh and explicitly assigning each vertex on the mesh to a point on the UV plane. This method is slow and must be repeated often, as the same texture map can't be used by different meshes of the same object. For any kind of parametrization or UV mapping, a good UV map must be injective and should be conformal (preserving angles) while having few cuts. Ideally it should also be isometric (preserve relative areas). In general, however, more cuts are needed to achieve less distortion. Our research mentor, Marco Tarini, developed the method of Volume-encoded UV mapping. In this method, a surface is represented by parametric functions and each point on the surface is mapped to a UV position as a function of its 3D position. 
This is done by enclosing the surface or portion of the surface in a unit cube, or voxel, and assigning UV coordinates to each of the cube's eight vertices. All other points on the surface can be mapped by trilinear interpolation. Volume-encoded parametrization has the advantage of only needing to store eight sets of UV coordinates per voxel, instead of unique locations of many mesh vertices, and can be applied to different types of surface representations, not just meshes. We spent the first week of our research project learning about volume-encoded parametrization by exploring the 2D equivalent: mapping 2D curves to a one-dimensional line, u. Given a curve enclosed within a unit square, our task was to find the u-value for each corner of the square that optimized injectivity, conformality, and orthogonality. We did this using a Least Squares method to solve a linear system consisting of a set of constraints applied to points on the surface. All other points on the curve could then be mapped to u by bilinear interpolation. A quarter circle (the red curve on the xy plane of the 3D plot) is mapped to a line, u, which is also represented as the height on the z axis in the 3D plot. The surface in the 3D plot is obtained by bilinear interpolation of the height of each corner of the square, and the red curve along the surface represents the path of the quarter circle mapped to 1D. Each point on the path has a unique height, indicating an injective mapping. An injective mapping was not possible for portions of a circle greater than a semicircle. Each point on the path of u does not have a unique height, indicating loss of injectivity. There is also distortion in the middle where the slope is variable. In Week 2 of the project, we moved on to 3D and created UV maps for portions of a sphere, using a similar method to the 2D case. Some of the questions we wanted to answer were: For what types of surfaces is volume-encoded parametrization possible? At what level of shape complexity is it necessary to introduce cuts, or split up the surface into smaller voxels? In the 2D case, we found that injectivity could be preserved when mapping curves less than half a circle, but there were overlaps for curves greater than a semicircle. One dimension up, when we go into the 3D space, we found that uniform sampling on the quarter sphere was challenging. Sampling uniformly in the 2D parametric space will result in a distorted sampling that becomes denser when getting closer to the north pole of the sphere. We tried three different approaches: rewriting the parametric equations, sampling in a unit-sphere, and sampling in the 3D target space and then projecting the sample points back to the surface. Unfortunately, all three methods only worked with a certain parametric sphere equation. When mapping a portion of a sphere to 2D, the grid allows us to see where the mapping is distorted. This mapping is most accurate at the north pole, while areas and angles both become distorted toward the equator. In conclusion, over the two weeks, we designed and tested a volume-encoded least-squares conformal mapping in both 2D and 3D. In the future, we plan to rewrite the code in C++ and run more experiments. Intrinsic Parameterization: Weeks 1-2 Post author By Tal No Comments on Intrinsic Parameterization: Weeks 1-2 By Joana Portmann, Tal Rastopchin, and Sahra Yusuf. Mentored by Professor Keenan Crane. Intrinsic parameterization During these last two weeks, we explored intrinsic encoding of triangle meshes. 
As an introduction to this new topic, we wrote a very simple algorithm that lays out a triangle mesh flat. We then improved this algorithm via line search over a week. In connection with this, we looked into terms like 'angle defects,' 'cotangent weights,' and the 'cotangent Laplacian' in preparation for more current research during the week. From intrinsic to extrinsic parameterization To get a short introduction into intrinsic parameterization and its applications, I quote some sentences from Keenan's course. If you're interested in the subject I can recommend the course notes "Geometry Processing with Intrinsic Triangulations." "As triangle meshes are a basic representation for 3D geometry, playing the same central role as pixel arrays in image processing, even seemingly small shifts in the way we think about triangle meshes can have major consequences for a wide variety of applications. Instead of thinking about a triangle as a collection of vertex positions in \(\mathbb{R}^n\) from the intrinsic perspective, it's a collection of edge lengths associated with edges." Many properties of a surface such as the shortest path do only depend on local measurements such as angles and distances along the surfaces and do not depend on how the surface is embedded in space (e.g. vertex positions in \(\mathbb{R}^n\)), so an intrinsic representation of the mesh works fine. Intrinsic triangulations bring several deep ideas from topology and differential geometry into a discrete, computational setting. And the framework of intrinsic triangulations is particularly useful for improving the robustness of existing algorithms. Laying out edge lengths in the plane Our first task was to implement a simple algorithm that uses intrinsic edge lengths and a breadth-first search to flatten a triangle mesh onto the plane. The key idea driving this algorithm is that given just a triangle's edge lengths we can use the law of cosines to compute its internal angles. Given a triangle in the plane we can use the internal angles to flatten out its neighbors into the plane. We will later use these angles to modify the edge lengths so that we "better" flatten the model. The algorithm works by choosing some root triangle and then performing a breadth-first traversal to flatten each of the adjacent triangles into the plane until we have visited every triangle in the mesh. 
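To illustrate the law-of-cosines step, here is a minimal Python sketch that lays out a single triangle in the plane from its intrinsic edge lengths alone. The helper name and conventions are ours, not the project's geometry-central code; edge a is opposite vertex A, and so on.

import numpy as np

def flatten_root_triangle(l_a, l_b, l_c):
    # interior angle at A from the law of cosines (edge a is opposite vertex A)
    alpha = np.arccos(np.clip((l_b**2 + l_c**2 - l_a**2) / (2.0 * l_b * l_c), -1.0, 1.0))
    A = np.array([0.0, 0.0])
    B = np.array([l_c, 0.0])                              # place edge AB along the x-axis
    C = l_b * np.array([np.cos(alpha), np.sin(alpha)])    # C sits at angle alpha from A
    return A, B, C

Flattening a neighboring triangle works the same way: two of its vertices are already placed (the shared edge), and the third is found from its interior angle and edge lengths measured from that shared edge.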
Breadth-first search pseudocode

Some initial geometry central pseudocode for this breadth-first search looks something like this:

// Pick and flatten some starting triangle
Face rootTriangle = mesh->face(0);
// Calculate the root triangle's flattened vertex positions
calculateFlatVertices(rootTriangle);
// Initialize a map encoding visited faces
FaceData<bool> isVisited(*mesh, false);
// Initialize a queue for the BFS
std::queue<Face> visited;
// Mark the root triangle as visited and push it into the queue
isVisited[rootTriangle] = true;
visited.push(rootTriangle);
// While our queue is not empty
while (!visited.empty()) {
    // Pop the current Face off the front of the queue
    Face currentFace = visited.front();
    visited.pop();
    // Visit the adjacent faces
    for (Face adjFace : currentFace.adjacentFaces()) {
        // If we have not already visited the face
        if (!isVisited[adjFace]) {
            // Calculate the triangle's flattened vertex positions
            calculateFlatVertices(adjFace);
            // Mark it as visited and push it onto the queue
            isVisited[adjFace] = true;
            visited.push(adjFace);
        }
    }
}

In order to make sure we lay down adjacent triangles with respect to the computed flattened plane coordinates of their parent triangle, we need to know exactly how a child triangle connects to its parent triangle. Specifically, we need to know which edge is shared by the parent and child triangle and which point belongs to the child triangle but not the parent triangle. One way we could retrieve this information is by computing the set difference between the vertices belonging to the parent triangle and the child triangle, all while carefully keeping track of vertex indices and edge orientation. This certainly works; however, it can be cumbersome to write a brute-force combinatorial helper method for each unique mesh element traversal routine.

The halfedge mesh data structure

Professor Keenan Crane explained that a popular mesh data structure that allows a scientist to conveniently implement mesh traversal routines is that of the halfedge mesh. At its core, a halfedge mesh encodes the connectivity information of a combinatorial surface by keeping track of a set of halfedges and the two connectivity functions known as twin and next. Here the set of halfedges is none other than the set of directed edges obtained from an oriented triangle mesh. The twin function takes a halfedge and brings it to its corresponding, oppositely oriented twin halfedge. In this fashion, if we apply the twin function to some halfedge twice, we get the same initial halfedge back. The next function takes a halfedge and brings it to the next halfedge in the current triangle. In this fashion, if we apply the next function to a halfedge belonging to a triangle three times, we get the same initial halfedge back.

A diagram depicting the halfedge data structure connectivity relationships. Source.

Professor Keenan Crane has a well-written introduction to the halfedge data structure in section 2.5 of his course notes on Discrete Differential Geometry. It turns out that geometry central uses the halfedge mesh data structure, and so we can rewrite the traversal of the adjacent faces loop to more easily retrieve our desired connectivity information. In the geometry central implementation, every mesh element (vertex, edge, face, etc.) contains a reference to a halfedge, and vice versa.
// Get the face's halfedge
Halfedge currentHalfedge = currentFace.halfedge();
do {
    // Get the current adjacent face
    Face adjFace = currentHalfedge.twin().face();
    if (!isVisited[adjFace]) {
        // Retrieve our desired vertices
        Vertex a = currentHalfedge.vertex();
        Vertex b = currentHalfedge.next().vertex();
        Vertex c = currentHalfedge.twin().next().next().vertex();
        calculateFlatVertices(a, b, c);
    }
    // Iterate to the next halfedge
    currentHalfedge = currentHalfedge.next();
    // Exit the loop when we reach our starting halfedge
} while (currentHalfedge != currentFace.halfedge());

Here's a diagram illustrating the relationship between the currentHalfedge and vertices a, b, and c.

A diagram illustrating the connectivity relationship between the currentHalfedge and vertices a, b, and c. Note that cH abbreviates currentHalfedge.

Segfaults, debugging, and ghost faces

This all looks great, right? Now we just need to determine the specifics of calculating the flat vertices? Well, not exactly. When we were running a version of this code in which we attempted to visualize the resulting breadth-first traversal, we encountered several random segfaults. When Sahra ran a debugger (shout out to GDB and LLDB 🥰) we learned that the culprit was the isVisited[adjFace] lookup in the line if (!isVisited[adjFace]). We were very confused as to why we would be getting a segfault while trying to retrieve the value corresponding to the key adjFace contained in the map FaceData<bool> isVisited. Sahra inspected the contents of the adjFace object and observed that it had index 248, whereas the mesh we were testing the breadth-first search on only had 247 faces. Because C++ zero-indexes its elements, this means we somehow retrieved a face with an index out of range by 2! How could this have happened? How did we retrieve that face in the first place?

Looking at these lines, we realized that we had made an unsafe assumption about currentHalfedge. In particular, we assumed that it was not a boundary halfedge. What does the twin of a boundary halfedge that has no real twin look like? If the issue we were running into was that the currentHalfedge was a boundary halfedge, why didn't we get a segfault on simply currentHalfedge.twin()? Doing some research, we found that the geometry central internals documentation explains that "We can implicitly encode the twin() relationship by storing twinned halfedges adjacent to one another -- that is, the twin of an even-numbered halfedge numbered he is he+1, and the twin of an odd-numbered halfedge is he-1."

Geometry central internals documentation

Aha! This explains exactly why currentHalfedge.twin() still works on a boundary halfedge; behind the scenes, it is just adding or subtracting one from the halfedge's index. Where did the face index come from? We're still unsure, but we realized that the face currentHalfedge.twin().face() only makes sense (and hence can only be used as a key for the visited map) when currentHalfedge is not a boundary halfedge. Here is a diagram of the "ghost face" that we think the line Face adjFace = currentHalfedge.twin().face() was producing.

A diagram depicting how taking the face of the twin of a boundary halfedge produces a nonexistent face.

Changing the map access line in the if statement to if (!currentHalfedge.edge().isBoundary() && !isVisited[adjFace]) resolved the segfaults and produced a working breadth-first traversal.

Conformal parameterization

Here is a picture illustrating applying the flattening algorithm to a model of a cat head.
A picture illustrating the application of the flattening algorithm to a model of a cat head.

You can see that there are many cracks, and this is because the model of the cat head is not flat; in particular, it has vertices with nonzero angle defect. The angle defect of a given vertex is equal to the difference between \(2 \pi\) and the sum of the corner angles incident to that vertex. This is a good measure of how flat a vertex is because, for a perfectly flat vertex, all angles surrounding it will sum to \(2 \pi\).

After laying out the edges on the plane \(z=0\), we began the necessary steps to compute a conformal flattening (an angle-preserving parameterization). In order to complete this step, we needed to find a set of new edge lengths which would both be related to the original ones by scale factors and minimize the angle defects, \(l_{ij} := \sqrt{\phi_i \phi_j}\, l_{ij}^0, \quad \forall ij \in E\), where \(l_{ij}\) is the new intrinsic edge length, \(\phi_i, \phi_j\) are the scale factors at vertices \(i, j\), and \(l_{ij}^0\) is the initial edge length.

Discrete Yamabe flow

At this stage, we have a clear goal: to optimize the scale factors in order to scale the edge lengths and minimize the angle defects across the mesh (i.e., fully flatten the mesh). From here, we use the discrete Yamabe flow to meet both of these requirements. Before implementing this algorithm, we began by substituting the scale factors with their logarithms, \(l_{ij} = e^{(u_i + u_j)/2} l_{ij}^0\), where \(l_{ij}\) is the new intrinsic edge length, \(u_i, u_j\) are the logarithmic scale factors at vertices \(i, j\), and \(l_{ij}^0\) is the initial edge length. This ensures the new intrinsic edge lengths are always positive and that the optimization is convex.

To implement the algorithm, we followed this procedure:

1. Calculate the initial edge lengths.
2. Until all angle defects are below a certain threshold \(\epsilon\):
- Compute the scaled edge lengths.
- Compute the current angle defects from the new interior angles determined by the scaled edge lengths.
- Update the scale factors using the step size and the angle defects: \(u_i \leftarrow u_i - h \Omega_i\), where \(u_i\) is the scale factor of the \(i\)th vertex, \(h\) is the gradient descent step size, and \(\Omega_i\) is the intrinsic angle defect at the \(i\)th vertex.

After running this algorithm and displaying the result, we found that we were able to obtain a perfect conformal flattening of the input mesh (there were no cracks!). There was one issue, however: we needed to manually choose a step size that would work well for the chosen epsilon value. Our next step was to extend our current algorithm by implementing a backtracking line search which would change the step size based on the energy.

Here are two videos demonstrating the Yamabe flow algorithm. The first video illustrates how each iteration of the flow improves the flattened mesh, and the second video illustrates how that translates into UV coordinates for texture mapping. We are really happy with how these turned out!

A video visualizing intermediate 2D parameterizations produced by the Yamabe flow.

A video visualizing the intermediate UV coordinates on the cat head mesh produced by the Yamabe flow.

Line search

To implement this, we added a sub-loop to our existing Yamabe flow procedure which repeatedly tests smaller step sizes until one is found which decreases the energy, i.e., a backtracking line search (a sketch combining these ingredients appears below). A good resource on this topic is Stephen Boyd and Lieven Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
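To tie these pieces together, here is a compact NumPy sketch of the discrete Yamabe flow with a backtracking line search. This is our own illustration, not the authors' implementation: it assumes edge lengths are stored per face corner (l[f, k] is the length of the edge opposite corner k), updates only interior vertices, and, as a simplification, uses the sum of squared angle defects as the backtracking acceptance test instead of the discrete conformal energy whose gradient the angle defects are.

import numpy as np

def corner_angles(l):
    # interior angles from edge lengths via the law of cosines;
    # l[f, k] is the length of the edge opposite corner k of face f
    a, b, c = l[:, 0], l[:, 1], l[:, 2]
    alpha = np.arccos(np.clip((b**2 + c**2 - a**2) / (2 * b * c), -1.0, 1.0))
    beta = np.arccos(np.clip((c**2 + a**2 - b**2) / (2 * c * a), -1.0, 1.0))
    gamma = np.pi - alpha - beta
    return np.stack([alpha, beta, gamma], axis=1)

def angle_defects(F, l, n_vertices):
    # 2*pi minus the sum of the corner angles incident to each vertex
    defect = np.full(n_vertices, 2.0 * np.pi)
    np.subtract.at(defect, F, corner_angles(l))
    return defect

def yamabe_flow(F, l0, n_vertices, interior, h=1.0, eps=1e-8, max_iter=200):
    # F: (num_faces, 3) vertex indices; l0[f, k]: initial length of the edge
    # opposite corner k; interior: boolean mask selecting interior vertices
    u = np.zeros(n_vertices)
    i_idx, j_idx = F[:, [1, 2, 0]], F[:, [2, 0, 1]]   # endpoints of the edge opposite each corner

    def lengths(u):
        return np.exp((u[i_idx] + u[j_idx]) / 2.0) * l0   # l_ij = e^{(u_i+u_j)/2} l0_ij

    for _ in range(max_iter):
        defect = angle_defects(F, lengths(u), n_vertices) * interior
        if np.max(np.abs(defect)) < eps:
            break
        # backtracking line search; the squared-defect sum is a simple proxy
        # criterion, not the discrete conformal energy itself
        step, current = h, np.sum(defect ** 2)
        while step > 1e-12:
            u_new = u - step * defect          # u_i <- u_i - h * Omega_i
            new_defect = angle_defects(F, lengths(u_new), n_vertices) * interior
            if np.sum(new_defect ** 2) < current:
                break
            step *= 0.5
        u = u_new
    return u, lengths(u)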
After resolving a bug in which our step size would stall at very small values with no progress, we were successful in implementing this line search. Now, a large step size could be given without missing the minima. Next, we worked on using Newton's method to improve the descent direction by changing the gradient to more easily reach the minimum. To complete this, we needed to calculate the Hessian; in this case, the Hessian of the discrete conformal energy is the cotan-Laplace matrix \(L\). This matrix is square (the number of rows and columns equals the number of interior vertices) and has off-diagonal entries \(L_{ij} = -\frac{1}{2} (\cot \theta_k^{ij} + \cot \theta_k^{ji})\) for each edge \(ij\), as well as diagonal entries \(L_{ii} = -\sum_{ij} L_{ij}\), where the sum runs over the edges \(ij\) incident to the \(i\)th vertex.

The Newton descent algorithm is as follows:

1. Build the cotan-Laplace matrix \(L\) based on the above definitions.
2. Determine the descent direction \(\dot{u} \in \mathbb{R}^{|V_{\text{int}}|}\) by solving the system \(L \dot{u} = -\Omega\), with \(\Omega \in \mathbb{R}^{|V_{\text{int}}|}\) the vector containing all of the angle defects at interior vertices.
3. Run line search again, but with \(\dot{u}\) replacing \(-\Omega\) as the search direction.

This method yielded a completely flat mesh with no cracks. Newton's method was also significantly faster: on one of our machines, Newton's method took 3 ms to compute a crack-free parameterization for the cat head model, while the original Yamabe flow implementation took 58 ms.

Tags intrinsic geometry, newton's method, parameterization

Branch-and-bound method for calculating Hausdorff distance

This week I worked on the "Robust computation of the Hausdorff distance between triangle meshes" project with our mentor Dr. Leonardo Sacht, TA Erik Amezquita, and my team Talant Talipov and Bryce Van Ross. We started our project by doing some initial reading about the topic.

The Hausdorff distance from triangle mesh A to triangle mesh B is defined as

$$h(A,B) = \max_{p \in A} d(p,B)$$

where \(d(p, B)\) is the Euclidean distance from the point \(p\) to the mesh \(B\). Finding the Hausdorff distance between two triangle meshes is one way of comparing these meshes. We note that the Hausdorff distance from A to B might be different from the Hausdorff distance from B to A, as you can see in the figure below, so it is important to distinguish which one is being computed.

Figure 1: Hausdorff distance from Mesh A to Mesh B

Figure 2: Hausdorff distance from Mesh B to Mesh A

Finally, we define

$$H(A,B) = \max\{h(A,B), h(B,A)\}$$

and use this value, which is symmetric, when comparing triangle meshes A and B.

Our first task was to implement the branch-and-bound method for calculating this distance. Suppose we want to calculate the Hausdorff distance from mesh A to mesh B. There were three main steps in the algorithm: calculation of the upper and lower bounds for the Hausdorff distance, and discarding and subdividing triangles according to the values of the upper and lower bounds. The upper bound for each triangle in A is calculated by taking the maximum of the distances from the given triangle to every vertex in B. The lower bound is calculated over A by taking the minimum of the distances from each vertex p in A to the triangle in B that is closest to p. If a triangle in A has an upper bound that is less than the lower bound, the triangle is discarded.
After the discarding process, the remaining triangles are subdivided into four triangles and the process is repeated, with recalculation of the upper bounds and the lower bound. The algorithm is run until the values for the upper and lower bounds are within some ε of each other. Ultimately, we get a triangle region that contains the point that realizes the Hausdorff distance from A to B.

To implement this method, my teammates tackled the upper and lower bound codes while I wrote an algorithm for the discarding and subdividing process. We ran this algorithm with testing values u = [1;2;3;4;5] and l = 3 and got these results:

Figure 3: Initial mesh

Figure 4: After discarding and subdividing

As expected, the two triangles that had upper bounds of 1 and 2 were discarded and the rest were subdivided. The lower bound algorithm turned out to be more challenging than we anticipated, and we worked together to come up with a robust method. Currently, we are working on finishing up this part of the code so that we can run and test the whole algorithm. After this, we are looking forward to extending this project in different directions, whether that is a more theoretical analysis of the topic or working on improving our algorithm!

Bayesian Rotation Synchronization

By Adrish Dey, Dorothy Najjuma Kamya, and Lily Kimble

Note: Although this is an ongoing work, this report documents our progress during the official 2 weeks of the project (August 2, 2021 – August 13, 2021).

The past 2 weeks at SGI, we have been working with David Palmer on investigating a novel Bayesian approach towards the angular synchronization problem. This blog post is written to document our work and share a sneak peek into our research.

Consider a set of unknown absolute orientations \(\{q_1, q_2, \ldots, q_n\}\) with respect to some fixed basis. The problem of angular synchronization deals with the accurate estimation of these orientations from noisy observations of their relative offsets \(O_{i, j}\), up to a constant additive phase. We are particularly interested in estimating these "true" orientations in the presence of many outlier measurements. Our interest in this topic stems from the fact that the angular synchronization problem arises in various avenues of science, including reconstruction problems in computer vision, ranking problems, sensor network localization, structural biology, etc. In our work, we study this problem from a Bayesian perspective, by modelling the observed data as a mixture between noisy observations and outliers. We also investigate the problem of continuous label switching, a global ambiguity that arises from the lack of knowledge about the basis of the absolute orientations \(q_i\). Finally, we experiment with a novel Riemannian gradient descent method for alleviating this continuous label switching problem and provide our observations herein.

Brief Primer on Bayesian Inference

Before going deeper, we'll briefly discuss Bayesian inference. At the heart of Bayesian inference lies the celebrated Bayes' rule (read \(a|b\) as "a given b"):

\[\underbrace{P(q \mid O)}_{\textrm{posterior}} = \frac{\overbrace{P(O \mid q)}^{\textrm{likelihood}} \cdot \overbrace{P(q)}^{\textrm{prior}}}{\underbrace{\int\limits_q P(O \mid q)\cdot P(q)}_{\textrm{evidence}}}\]

In our problem, \(q\) and \(O\) denote the true orientations that we are estimating and the noisy observations with outliers, respectively.
We are interested in finding the posterior distribution (or at least samples from it) over the ground truth \(q\) given the noisy observations \(O\). The denominator (i.e., the evidence or partition function) is an integral over all \(q\)s. Exactly evaluating this integral is often intractable if \(q\) lies on some continuous manifold, as in our problem. This makes drawing samples from the posterior hard. One way to avoid computing the partition function is a sampling method called Markov Chain Monte Carlo (MCMC). Intuitively, the posterior is approximated by a Markov chain whose transitions can be computed using a simpler distribution called the proposal distribution. Successive samples are then accepted or rejected based on an acceptance probability designed to ensure convergence to the posterior distribution in the limit of infinite samples. Simply put, after enough samples are drawn using MCMC, they will look like samples from the posterior \(P(q|O) \propto P(O|q) \cdot P(q)\), without requiring us to calculate the intractable normalization \(P(O) = \int\limits_q P(O|q)\cdot P(q)\). In our work, we use Hamiltonian Monte Carlo (HMC), an efficient variant of MCMC, which uses Hamiltonian dynamics to propose the next sample. From an implementation perspective, we use the built-in HMC sampler in Stan for drawing samples.

Mixture Model

As mentioned before, we model the noisy observations as a mixture of the true distribution and outliers. This is denoted by (Equation 1):

\[O_{i, j} = \begin{cases} q_i q_j^T + \eta_{i, j} & \textrm{with prob. } p \\ \textrm{Uniform}(\textrm{SO}(D)) & \textrm{with prob. } 1 - p \end{cases}\]

where \(\eta_{i, j}\) is the additive noise on our true observation, \(q_i q_j^T\) is the relative orientation between the \(i^\textrm{th}\) and \(j^\textrm{th}\) objects, and \(\textrm{Uniform}(\textrm{SO}(D))\) is the uniform distribution over the rotation group \(\textrm{SO}(D)\), representing our outlier distribution. \(\textrm{SO}(D)\) is the space in which every element represents a D-dimensional rotation. This mixture model serves as the likelihood \(P(O|q)\) for our Bayesian framework.

Sampled Result

Ground truth samples of \(q_i\).
\(\hat{q}_i \sim p(q|O)\) (estimated \(q_i\)) sampled from our posterior \(p(q|O)\).

The orientations \(\hat{q}_i\) sampled from the posterior look significantly rotated with respect to the original samples. Notice this is a global rotation, since all the samples are rotated equally. This problem of global ambiguity of the absolute orientations \(q_i\) arises from the fact that the relative orientations \(q_i q_j^T\) and \(\tilde{q}_i \tilde{q}_j^T\) of two different sets of vectors can be the same even if the absolute orientations are different. The following section goes over this and provides a sneak peek into our solution for alleviating this problem.

Continuous Label Switching

A careful observation of our problem formulation (Equation 1) reveals that the problem is invariant to transformations of the absolute orientations as long as the relative orientations are preserved. Consider the 2 pairs of observations in the figure below (Blue and Red; Yellow and Green). Let the absolute orientations be \(q_1\), \(q_2\), \(\tilde{q}_1\) and \(\tilde{q}_2\), and the relative orientations between the pairs be \(R_{12}\) and \(\tilde{R}_{12}\). As the absolute orientations \(q_1\) and \(q_2\) are equally rotated by a rotation matrix \(A\), the relative orientation between them, \(R_{12}\), is preserved.
More formally, let \(A \in \textrm{SO}(D)\) be a random orientation matrix in D dimensions. The following equation demonstrates how rotating two absolute orientations \(q_i\) and \(q_j\) by a rotation matrix \(A\) preserves the relative orientation — which in turn gives rise to a global ambiguity in our framework: \[R_{ij} = q_i q_j^T = q_i A A^T q_j^T = (q_i A) (A^T q_j^T) = \tilde{q}_i \tilde{q}_j^T\] Since our inputs to the model are relative orientations, this ambiguity (known as label switching) causes our Bayesian estimates to come randomly rotated by some rotation \(A\). Proposed Solution Based on Monteiller et al., in this project we explored a novel solution for alleviating this problem. The intuition is that we believe the unknown ground truth is close to the posterior samples up to a global rotation. Hence we try to approximate the ground truth by starting out with a random guess and optimizing for the alignment map between the estimate and the ground truth. Using this alignment map and the posterior samples, we iteratively update the guess, using a custom Riemannian Stochastic Gradient Descent over \(\textrm{SO}(D)\): 1. Start with a random guess \(\mu \sim \mathrm{Uniform}(\mathrm{SO}(D))\). 2. Sample \(\hat{q} \sim P(q | O)\), where \(P(q | O)\) is the posterior. 3. Find the global ambiguity \(R\) between \(\hat{q}\) and \(\mu\). This can be obtained by solving for \(\mathrm{argmin}_R \, \| \mu - R \hat{q}\|_\mathrm{F}\). 4. Move \(\mu\) along the shortest geodesic toward \(\hat{q}\). 5. Repeat Steps 2-4 until convergence. Convergence is detected by a threshold on geodesic distance. We use this method to estimate the mean of the posterior over \(\mathrm{SO}(2)\) and plot the results (i.e. 2D orientations) in the complex plane as shown below. Figure panels: Original Sample; Sampled Posterior; Optimized Mean Posterior. The proposed optimization procedure is able to successfully re-align the posterior samples by alleviating the continuous label switching problem. In conclusion, we study the rotation synchronisation problem from a Bayesian perspective. We explore a custom Riemannian Gradient Descent procedure and perform experiments in the \(\mathrm{SO}(2)\) case. The current method is tested on a simple toy dataset. As future work we are interested in improving our Bayesian model and benchmarking it against the current state-of-the-art. There are certain performance bottlenecks in our current architecture, which constrain us to test only on \(\mathrm{SO}(2)\). In the future, we are also interested in carrying out experiments more thoroughly in various dimensions. While the current MCMC procedure we are using does not account for the non-Euclidean geometry of the space of orientations, \(\mathrm{SO}(D)\), we are looking into replacing it with Riemannian versions of MCMC. Minimal Surfaces, But Periodic By Zhecheng Wang, Zeltzyn Guadalupe Montes Rosales, and Olga Guțan Note: This post describes work that has occurred between August 9 and August 20. The project will continue for a third week; more details to come. For the past two weeks we had the pleasure of working with Nicholas Sharp, Etienne Vouga, Josh Vekhter, and Erik Amezquita. We learned about a special type of minimal surface: triply-periodic minimal surfaces. Their name stems from their repeating pattern. Broadly speaking, a minimal surface minimizes its surface area. This is equivalent to having zero mean curvature.
A triply-periodic minimal surface (TPMS) is a surface in \(\mathbb{R}^{3}\) that is invariant under a rank-3 lattice of translations. Figure 1. (Left) A Minimal Surface [source] and (right) a TPMS [source]. Let's talk about nonmanifold surfaces. "Manifold" is a geometry term that means: every local region of the surface looks like the plane (more formally — it is homeomorphic to a subset of Euclidean space). Non-manifold then allows for parts of the surface that do not look like the plane, such as T-junctions. Within the context of triangle meshes, a nonmanifold surface is a surface where more than two faces share an edge. II. What We Did First, we read and studied the 1993 paper by Pinkall and Polthier that describes the algorithm for generating minimal surfaces. Our next goal was to generate minimal surfaces. Initially, we used pinned (Dirichlet) boundary conditions and regular manifold shapes. After ensuring that our code worked on manifold surfaces, we tested it on non-manifold input. Additionally, our team members learned how to use Blender. It has been a very enjoyable process, and the work was deeply satisfying, because of the embedded mathematical ideas intertwined with the artistic components. III. Reading the Pinkall Paper "Computing Discrete Minimal Surfaces and Their Conjugates," by Ulrich Pinkall and Konrad Polthier, is the classic paper on this subject; it introduces the iterative scheme we used to find minimal surfaces. Reading this paper was our first step in this project. The algorithm that finds a discrete locally area-minimizing surface is as follows: 1. Take the initial surface \(M_0\) with boundary \(∂M_0\) as the first approximation of M. Set i to 0. 2. Compute the next surface \(M_{i+1}\) by solving the linear Dirichlet problem $$ \min_{M} \frac{1}{2}\int_{M_{i}}|\nabla (f:M_{i} \to M)|^{2}$$ 3. Set i to i+1 and continue with step 2. The stopping condition is \(|\text{area}(M_i)-\text{area}(M_{i+1})|<\epsilon\). In our case, we used a maximum number of iterations, set by the user, as a stopping condition. There are additional subtleties that must be considered (such as "what to do with degenerate triangles?"), but since we did not implement them — their discussion is beyond the scope of this post. IV. Adding Periodic Boundary Conditions This is, at its core, an optimization problem. To ensure that the optimization works, the boundary conditions have to be periodic instead of fixed in space. This is because we are enforcing a set of boundary conditions on periodic shapes — that is, tiling in a 3D space. IV(a). Matching Vertices First, we need to check every pair of vertices in the mesh. We are looking to see if they have identical coordinates in two dimensions, but are separated by exactly two units in the third dimension. When we find such pairs of vertices, we classify them into \(G_x\), \(G_y\), or \(G_z\). Note that we only store unique pairs. IV(b). Laplacian Smoothness at the Boundary Vertices Instead of using the discrete Laplacian, now we introduce a sparse matrix K to adjust our smoothness term based on the new boundary: $$\min_{x}x^T(L^TK^TKL)x \text{ s.t. } x[b] = x_0[b].$$ Next, we construct the matrix K, which is a sparse square matrix of dimension #vertices by #vertices. To do so, we set \(K(i,i) = 1\), \(K(i,j) = 1\), and set the \(j\)th row entries to 0 for every pair of unique matched boundary vertices. For every interior vertex \(i\), we set \(K(i,i) = 1\). IV(c).
Aligning Boundary Vertices Now we no longer want to pin boundary vertices to their original location in space. Instead, we want to allow our vertices to move, while the opposite sides of the boundary still match. To do that, we need to adjust the existing constraint term and to include additional linear constraints \(Ax=b\). Therefore, we add two sets of linear constraints to our linear system: For any pair of boundary vertices distanced by 2 units in one direction, the new coordinates should differ by 2 units. For any pair of boundary vertices matched in the two other directions, the new coordinates should differ by 0. We construct a selection matrix \(A\), which is a #pairs of boundary vertices in \(G\) by #vertices sparse matrix, to get the distance between any pair of boundary vertices. For every \(r\)th row, \(A(r,i)=1, A(r,j)=-1.\)$ Then we need to construct 3 \(b\) vectors, each of which is a sparse square matrix of the size [# of vector pairs of boundary vertices in G for a 3D coordinate (x,y,z)]. Based on whether at one given moment we are working with \(G_x\), \(G_y\), or \(G_z\), we enter \(2\) for those selected pairs, and \(0\) for the rest. V. Correct Outcomes Below, we can see the algorithm being correctly implemented. Each video represents a different mesh. VI. Aesthetically Pleasing Bugs Nothing is perfect, and coding in Matlab is no exception. We went through many iterations of our code before we got a functional version. Below are some examples of cool-looking bugs we encountered along the way, while testing the code on (what has become) one of our favorite shapes. Each video represents a different bug applied onto the same mesh. VII. Conclusion and Future Work Further work may include studying the physical properties of nonmanifold TPMS. It may also include additional basic structural simulations for physical properties, and establishing a comparison between the results for nonmanifold surfaces and the existing results for manifold surfaces. Additional goals may be of computational or algebraic nature. For example, one can write scripts to generate many possible initial conditions, then use code to convert the surfaces with each of these conditions into minimal surfaces. An algebraic goal may be to enumerate all possible possibly-nonmanifold structures and perhaps categorize them based on their properties. This is, in fact, an open problem. The possibilities are truly endless, and potential directions depend on the interests of the group of researchers undertaking this project further. Elastic curves and active bending Post author By Natasha No Comments on Elastic curves and active bending By: Judy Chiang, Natasha Diederen, Erick Jimenez Berumen Project mentor: Christian Hafner Architects often want to create visually striking structures that involve curved materials. The conventional way to do this is to pre-bend materials to form the shape of the desired curve. However, this generates large manufacturing and transportation costs. A proposed alternative to industrial bending is to elastically bend flat beams, where the desired curve is encoded into the stiffness profile of the beam. Wider sections of the beam will have a greater stiffness and bend less than narrower sections of the beam. Hence, it is possible to design an algorithm that enables a designer or architect to plan curved structures that can be assembled by bending flat beams. This is a topic currently being explored by Bernd Bickel and his student Christian Hafner (our project mentor). 
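A piece of standard beam theory, not spelled out in the post but useful for intuition about why width encodes stiffness (the constants here are generic background, not taken from the project): for a flat beam of Young's modulus \(E\), constant thickness \(h\) and varying width \(w(s)\), the Euler-Bernoulli bending stiffness is \[ K(s) = E\,I(s) = \frac{E\,w(s)\,h^{3}}{12}, \] so \(K(s)\) is directly proportional to the width profile \(w(s)\). This is why cutting a strip to a suitable width profile is enough to prescribe its stiffness profile.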
A model created using the concept of active bending (source) For the flat beam to remain bent as the desired curve, we must ensure that the beam assumes this form at its equilibrium point. In Lagrangian mechanics, this occurs when energy is minimised, since this implies that there is no other configuration of the system which would result in lower overall energy and thus be optimal instead. Two different questions arise from this formulation. First, given the stiffness profile \(K\), what deformed shapes \(\gamma\) can be generated? Second, given a curve \(\gamma\), what stiffness profiles \(K\) can be generated? In answering the first problem we will find the curve that will result from bending a beam with a given width profile. However, we are more interested in finding the stiffness profile of a beam which will result in a curved shape \(\gamma\) of our choice. Hence, we wish to solve the second problem. Below, we will give a full formulation of our main problem, and discuss how we transformed this into MATLAB code and created a user interface. We will conclude by discussing a more generalised case involving joint curves. Problem formulation To recap, the problem we want to solve is, given a curve \(\gamma \), what stiffness profiles \(K\) can be generated? For each point \(s\) on our curve \(\gamma: (0,\ell) \to \mathbb{R}^2\), we have values for the position \(\gamma(s)\), turning angle \(\alpha(s)\), and signed curvature \(\alpha'(s)\). In addition, we define the energy of the beam system to be \[W[\alpha] := \int_0^\ell K(s)\alpha'(s)^2 ds,\] where \(K(s)\) is the stiffness at point \(s\). At the equilibrium state of the system, \(W[\alpha]\) will take a minimum value. Intuitively, this implies that, in general, regions of larger curvature will have a lower stiffness. However, it is not true that two different points of equal curvature will have equal stiffness, since there are other factors at play. Now, before finding the \(K\) that minimises \(W[\alpha]\), we must set additional constraints \[\alpha(0) = \alpha_0, \quad \alpha(\ell) = \alpha_\ell\] and \[\gamma(0) = \gamma_0, \quad \gamma(\ell) = \gamma_\ell.\] In words, we are fixing the turning angles (tangents) and positions of the two boundary points. These constraints are necessary, since they dictate the position in space where the curve begins and ends, as well as the initial and final directions the curve moves in. We have now formulated a problem that can be solved using variational calculus. Without going into detail, we find that stationary points of this function are given by the equation \[K \kappa = a + \langle b, \gamma \rangle,\] where \(\kappa\) is the signed curvature (previously \(\alpha'\)), and \(a \in \mathbb{R}\) and \(b \in \mathbb{R}^2\) are constants to be found. However, a stiffness profile cannot be generated for all curves \(\gamma\). More specifically, it was shown by Bernd Bickel and Christian Hafner that a solution exists if and only if there exists a line \(L\) that intersects \(\gamma\) exactly at its inflections. With this information in hand, we can begin to create a linear program that computes the stiffness function \(K\). 
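The formulation above is stated in terms of the samples \(\gamma(s)\), the turning angle \(\alpha(s)\), and the signed curvature \(\kappa = \alpha'(s)\). As a minimal MATLAB sketch of how these can be approximated from an ordered list of curve points (our own illustration, not the project's code; the function name and the treatment of the endpoints are assumptions):

% Discrete turning angle and signed curvature for an ordered polyline gamma (n-by-2).
% Sketch only, not the project code.
function [alpha, kappa, s] = discrete_curvature(gamma)
    d      = diff(gamma, 1, 1);                 % edge vectors, (n-1)-by-2
    len    = sqrt(sum(d.^2, 2));                % edge lengths
    alpha  = atan2(d(:,2), d(:,1));             % turning angle of each edge
    dalpha = diff(alpha);
    dalpha = mod(dalpha + pi, 2*pi) - pi;       % wrap angle differences to (-pi, pi]
    ds     = 0.5 * (len(1:end-1) + len(2:end)); % arc length attributed to each interior vertex
    kappa  = dalpha ./ ds;                      % signed curvature = d(alpha)/ds
    s      = [0; cumsum(len)];                  % arc-length parameter at the vertices
end

Here kappa is only defined at interior vertices; copying the boundary values from their neighbours is one of the assumptions mentioned above.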
The top four curves can be created using active bending, but the bottom four cannot (source) Creating a linear programme In order to find the stiffness \(K\), we need to solve for the constants \(a\) and \(b\) in the equation \( K \kappa = a + \langle b, \gamma \rangle \), which can be discretised to \[ K(s_i) = \frac{a + \langle b, \gamma(s_i) \rangle }{\kappa(s_i)}.\] It is possible that there is more than one solution for \(K\), so we want some way to determine the "best" stiffness profile. If we think back to our original problem, we want to ensure that our beam is maximally uniform, since this is good for structural integrity. Hence, we solve for \(a\) and \(b\) in the above equation in such a way that the ratio between maximum and minimum stiffness is minimised. To do this we set the minimum stiffness to an arbitrary value, for example 1, and then constrain \(K\) between 1 and \(M = \text{max}_i K_i\), thus obtaining the inequality \[1 \leq \frac{a + \langle b, \gamma(s_i) \rangle }{\kappa(s_i)} \leq M.\] This can be solved using MATLAB's linprog function (read the documentation here). In this case, the variables we want to solve for are \(a \in \mathbb{R}\), \(b \in \mathbb{R}^2\), and \(M\) (a scalar we want to minimise). So, using the linprog documentation, \(x\) is the vector \((a,b_1,b_2,M)\) and \(f\) is the vector \((0,0,0,1)\), since \(M\) is the only variable we want to minimise. Since linprog only deals with inequalities of the form \(A \cdot x \leq b\), we can split the above inequality into two and write it in terms of the elements of \(x\), like so: \[-\Bigg(\frac{1}{\kappa(s_i)}a + \frac{\gamma_x(s_{i})}{\kappa(s_i)}b_1 + \frac{\gamma_y(s_{i})}{\kappa(s_i)}b_2 + 0\Bigg) \leq -1,\] \[\frac{1}{\kappa(s_i)}a + \frac{\gamma_x(s_{i})}{\kappa(s_i)}b_1 + \frac{\gamma_y(s_{i})}{\kappa(s_i)}b_2 -M \leq 0.\] These two inequalities are of a form that can be easily written into linprog to obtain the values of \(a\), \(b\) and \(M\), and hence \(K\). Once we solved the linear program outlined above, we created a user interface in MATLAB that would allow users to draw and edit a spline curve and see the corresponding elastic strip created in real time. Custom splines can be imported as a .txt file in the following format or alternatively, the file that is already in the folder can be used. Users can then run the user interface and edit the spline in real time. To add points, simply shift-click, and a new point will be added at the midpoint between the selected point and the next point. The user can right-click to delete a point, and left-click and drag to move points around. If there are zero or one control points remaining, then the user can add a new point where their mouse cursor is by shift-clicking. The number of control points must be greater than or equal to the degree plus one for the spline to be formed. There are certain cases in which the linear program cannot be solved. In these cases the elastic strip is not plotted, and the user must move the control points around until it is possible to create a strip. Demonstration of the user interface Here is the link to the GitHub repository, for those who want to try the user interface out. Joints Between Two or More Strips The current version of our code is able to generate elastic strips for any (feasible) spline curve generated by the user. However, an as of yet unsolved problem is the feasibility condition for a pair of elastic strips with joints.
Solving this problem would allow us to compute a pair of stiffness functions \(K_1\) and \(K_2\) that yield elastic strips that can be connected via slots at the fixed joint locations. With a bit of maths we were able to derive the equilibrium equations that would produce such stiffness functions. Suppose the two spline curves are given by \(\gamma_{1}: [0,\ell_{1}] \to \mathbb{R}^2\) and \(\gamma_{2}: [0,\ell_{2}] \to \mathbb{R}^2\) and that they intersect at exactly one point such that \(\gamma_{1}(t_a) = \gamma_{2}(t_b)\). Then we must solve the following two equations: \[ K_{1}(t) \kappa_{1}(t) = \left \{ \begin{array}{ll} a_1 + \langle b_1, \gamma_{1}(t) \rangle + \langle c, \gamma_{1}(t) \rangle & 0 \leq t \leq t_a \\ a_1 + \langle b_1, \gamma_{1}(t) \rangle + \langle c, \gamma_{1}(t_a) \rangle & t_a < t \leq \ell_{1} \end{array}\right. \] \[ K_{2}(t) \kappa_{2}(t) = \left \{ \begin{array}{ll} a_2 + \langle b_2, \gamma_{2}(t) \rangle - \langle c, \gamma_{2}(t) \rangle & 0 \leq t \leq t_b \\ a_2 + \langle b_2, \gamma_{2}(t) \rangle - \langle c, \gamma_{2}(t_b) \rangle & t_b < t \leq \ell_{2} \end{array}\right. \] Similar to the one-spline case, these can be translated into a linear programming problem, which can be solved. Due to the time constraints of this project, we were not able to implement this in our code and build an associated user interface for creating multiple splines. Furthermore, above we have only discussed the situation with two curves and one joint. Adding more joints would increase the number of unknowns and add more sections to the above piecewise-defined functions. Lastly, we are still lacking a geometric interpretation for the above equations. All of these issues regarding the extension to two or more spline curves would serve as great inspiration for further research!
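Returning to the single-strip linear program from the "Creating a linear programme" section above, here is a minimal MATLAB sketch of how it can be fed to linprog (our own illustration, not the code in the linked repository; the function name, the variable packing and the infeasibility handling are assumptions):

% One-strip stiffness profile via linprog: x = [a; b1; b2; M], minimise M
% subject to 1 <= (a + <b, gamma_i>)/kappa_i <= M at every sample point.
function [K, a, b, M] = stiffness_lp(gamma, kappa)
    n   = size(gamma, 1);                    % gamma is n-by-2, kappa is n-by-1 (nonzero)
    f   = [0; 0; 0; 1];                      % only M enters the objective
    G   = [ones(n,1), gamma] ./ kappa;       % row i: coefficients of (a + <b, gamma_i>)/kappa_i
    A   = [-G, zeros(n,1);                   % -(a + <b, gamma_i>)/kappa_i      <= -1
            G, -ones(n,1)];                  %  (a + <b, gamma_i>)/kappa_i - M  <=  0
    rhs = [-ones(n,1); zeros(n,1)];
    [x, ~, exitflag] = linprog(f, A, rhs);
    if exitflag <= 0
        error('No feasible stiffness profile for this curve.');
    end
    a = x(1); b = x(2:3); M = x(4);
    K = (a + gamma * b) ./ kappa;            % discrete stiffness profile K(s_i)
end

The infeasible branch corresponds to the curves described above for which no stiffness profile exists, which is where the user interface declines to plot a strip.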
Why does the same limit work in one case but fail in another? The following questions has been bugging me since high-school calculus. Please help me find my peace once and for all: Consider a revolution solid generated by rotating a nice curve $f(x)$ around the $x$-axis on the interval $[a,b]$ (provided that $f(x)$ does not cross the $x$-axis on this interval). Let us first find the volume of this solid, $V$. We slice the interval $[a,b]$ into small segments of width $\delta_x$. Each segment is approximately a cylinder of radius $f(x)$ and height $\delta_x$, hence having a volume of $\pi [f(x)]^2 \delta_x$. Taking the limit we get $$V = \pi \int_a^b [f(x)]^2 \, dx$$ which is the right formula: great. Now we apply the exact same argument to find the surface area of the solid, $S$. The surface area of each cylindrically-approximated segment is $2 \pi f(x) \delta_x$, and taking the limit we obtain $$S = 2 \pi \int_a^b f(x) \, dx$$ which is not correct. We would obtain the correct formula for $S$ if we take the heights of the segments to be the arc lengths of $f(x)$ over each $\delta_x$, so I suspect that what is going wrong has something to do with this. But my question is: why does this argument work for finding $V$, but not $S$? calculus integration analysis limits fake-proofs MGAMGA $\begingroup$ Duplicates here and here...? $\endgroup$ – Andrew D. Hwang Mar 2 '14 at 15:22 If it were the exact same argument, you'd get the volume again. :) Seriously, though: "Each segment is approximately a cylinder" is the critical statement: not only is it approximately a cylinder, but the difference in volume between a true cylinder of that size and the approximating one goes down linearly as $\delta$ gets smaller. Yeah, your calculus book should have mentioned that, but they didn't. Sigh. [To be more precise: the ratio of the approximating and true volumes heads to 1 as $\delta \to 0$.] (See below for correction.) For the surface area, the ratio of the true area to the approximating area is, in the limit, the ratio of the hypotenuse of a right triangle to one of the bases: the first base is $\delta$, the second is $\delta f'(x)$, and the hypotenuse therefore has length $\delta \sqrt{1 + f'(x)^2}$. The ratio of this to the base you used is $\sqrt{1 + f'(x)^2}$, and remains at this value as $\delta \to 0$. So that's the difference. I haven't proved that if the ratio goes to 1, then the computations are the same -- that'd require carefully working through the definition of the Riemann integral, and some subtleties about swapping the order of limits. But I wanted to give you an answer that at least pointed in the right direction. When asked about my claim about ratios, I realized that in trying to say things simply, I had overshot: the problem is that when the function $f$ happens to be zero at some point, the ratio might not even be defined. So let me suppose that $f$ is everywhere nonzero, and briefly discuss the remaining case at the end. I want to show that as $\delta \to 0$, the volume ratio goes to $1$. Let me say that differently: for any positive number $A$, I'll show that if $\delta$ is small enough, then the volume ratio is between $1-A$ and $1 + A$. OK? To do this, I'm going to limit my attention to values $0 < A < 1/2$, because if I can make the volume ratio lie between 1-(1/2) and 1 + (1/2), I can certainly make it lie between $1 - (1000)$ and $1 + (1000)$, etc. Since $f$ is differentiable (or the area formula doesn't make sense), we know $f$ is continuous. 
Now let's look at some interval $[x_1, x_2]$, and point $x$ in that interval and compare $$ V_1 = \pi f(x)^2 \delta $$ with $V_2$, the volume of the slice of the solid between $x_1 $ and $x_2$. To save writing, let's write $\delta = x_2 - x_1$. On the interval $[x_1, x_2$, $f$ has a minimum value $m$ (greater than zero, because $f$ is continuous and everywhere positive) and a maximum value $M$. That means that the cylinder of radius $m$ is contained within the slice of the solid, and the cylinder of radius $M$ contains the slice of the solid. So $V_2$ is between $\pi m^2 \delta$ and $\pi M^2 \delta$. That means that the ratio of $V_1$ to $V_2$ is between $$ \left(\frac{f(x)}{m}\right)^2 $$ and $$ \left(\frac{f(x)}{M}\right)^2 $$ Now because $A < 1/2$, the numbers $\sqrt{1 + A} > 1$ and $\sqrt{1 - A}<1$ both make sense. So we can compute $$ U = \frac{f(x)}{\sqrt{1-A}} > f(x) \text{ and} \\ L = \frac{f(x)}{\sqrt{1+ A}} < f(x), $$ a pair of numbers a little above and below $f(x)$. By picking $x_1$ and $x_2$ sufficiently close to $x$, we can ensure that on the interval $[x_1, x_2]$, the function $f$ lies between $L$ and $U$. (The proof is that $f$ is continuous, so by shrinking the interval containing $x$, you can make its image fit in any open interval, such as $(L, U)$, contiaining $f(x)$.) With that done, the remainder is a computation: the volume ratio is between $$ \left(\frac{f(x)}{m}\right)^2 $$ and $$ \left(\frac{f(x)}{M}\right)^2 $$ (Note that the first of these is LARGER than the second!) Because $m$ is at least $L$, we have that $$ \left(\frac{f(x)}{m}\right)^2 < \left(\frac{f(x)}{L}\right)^2 < \left(\frac{f(x)} {\frac{f(x)}{\sqrt{1+ A}}}\right)^2 = \left( \sqrt{1+A}\right)^2 = 1 + A. $$ A similar argument shows that the ratio is greater than $1 - A$. When we have a point where $f(x) = 0$, things get really messy, and you need to start making arguments with "min" in them, and I don't think that adds any enlightenment, so I'm going to skip it. You asked as a final question "which kinds of approximations are valid and which are not?" The answer is "The approximations are all valid (any number is an approximation of any other number!), but they're only useful if, when you push the limits through the definition of the integral, things work out." I know that sounds like a rotten answer, but I could say it differently: to know whether you can make an approximation within an integral and take limits, you need to really understand the definition of integration." That hardly seems like an unfair request. I wish I had a magic bullet for you, but I don't. John HughesJohn Hughes $\begingroup$ The same reason why approximating a circle in a grid "yields" $\pi=4$ $\endgroup$ – chubakueno Mar 2 '14 at 15:30 $\begingroup$ Is it easy to prove that "To be more precise: the ratio of the approximating and true volumes heads to 1 as δ→0."? $\endgroup$ – MGA Mar 2 '14 at 15:41 $\begingroup$ No...it's not even true, but for small reasons. See edited stuff soon to be added above. $\endgroup$ – John Hughes Mar 2 '14 at 17:21 $\begingroup$ Thanks. I guess I could rephrase my question as "Which approximations are valid, and which are not?". $\endgroup$ – MGA Mar 2 '14 at 17:24 $\begingroup$ That's brilliant, I have accepted your answer. Would I be right in saying that a similar argument would show why the surface area argument does not work? $\endgroup$ – MGA Mar 2 '14 at 19:40 Not the answer you're looking for? 
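A quick numerical illustration of the accepted answer (added here as a sketch, not part of the original thread): take $f(x) = \sqrt{1-x^2}$ on $[-1,1]$, whose solid of revolution is the unit ball with volume $4\pi/3 \approx 4.19$ and surface area $4\pi \approx 12.57$. In MATLAB:

% Midpoint Riemann sums for f(x) = sqrt(1 - x^2) on [-1, 1]
n  = 1e5;
dx = 2/n;
x  = -1 + dx/2 : dx : 1 - dx/2;     % midpoints, so f'(x) stays finite
f  = sqrt(1 - x.^2);
fp = -x ./ f;                        % f'(x)
V     = sum(pi * f.^2 * dx);                       % 4.1888  = 4*pi/3  (correct volume)
S_bad = sum(2*pi * f * dx);                        % 9.8696  = pi^2    (wrong "area")
S     = sum(2*pi * f .* sqrt(1 + fp.^2) * dx);     % 12.5664 = 4*pi    (correct area)

The cylinder-based volume sum converges to the true volume, while the cylinder-based area sum converges to $\pi^2$ rather than $4\pi$; only the sum weighted by $\sqrt{1+f'(x)^2}$ recovers the surface area, matching the ratio argument in the answer.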
cell discovery Structure and plasticity of silent synapses in developing hippocampal neurons visualized by super-resolution imaging Cheng Xu1,2 na1, Hui-Jing Liu2,3 na1, Lei Qi2,3, Chang-Lu Tao1, Yu-Jian Wang1, Zeyu Shen2, Chong-Li Tian2,3, Pak-Ming Lau2,3 & Guo-Qiang Bi1,2,4 Cell Discovery volume 6, Article number: 8 (2020) Cite this article Membrane trafficking Excitatory synapses in the mammalian brain exhibit diverse functional properties in transmission and plasticity. Directly visualizing the structural correlates of such functional heterogeneity is often hindered by the diffraction-limited resolution of conventional optical imaging techniques. Here, we used super-resolution stochastic optical reconstruction microscopy (STORM) to resolve structurally distinct excitatory synapses formed on dendritic shafts and spines. The majority of these shaft synapses contained N-methyl-d-aspartate receptors (NMDARs) but not α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptors (AMPARs), suggesting that they were functionally silent. During development, as more spine synapses formed with increasing sizes and expression of AMPARs and NMDARs, shaft synapses exhibited moderate reduction in density with largely unchanged sizes and receptor expression. Furthermore, upon glycine stimulation to induce chemical long-term potentiation (cLTP), the previously silent shaft synapses became functional shaft synapses by recruiting more AMPARs than did spine synapses. Thus, silent shaft synapse may represent a synaptic state in developing neurons with enhanced capacity of activity-dependent potentiation. In the mammalian brain, excitatory communication between neurons is primarily mediated by glutamatergic synapses1,2. Activity-induced plasticity of these synapses is believed to underlie learning and memory function of the brain3,4,5,6. Electrophysiological studies have suggested that excitatory synapses may exhibit distinct functional properties or states7,8. An extreme case is the so-called silent synapse9,10,11,12, which contains few α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptors (AMPARs) and cannot carry out excitatory transmission upon presynaptic activation, but can be converted into the functional form through activity-dependent plasticity13,14,15,16. However, the structural and morphological correlates of these functional states have been lacking. Studies with electron microscopy (EM) have indicated that most glutamatergic excitatory synapses are formed on dendritic spines, in contrast to GABAergic inhibitory synapses that are primarily formed on dendritic shafts, although exceptions have been observed that some excitatory synapses formed directly on the shafts17,18,19,20. With conventional fluorescence microscopy, it was observed that early in development, N-Methyl-D-aspartate receptors (NMDARs) clusters might form on dendritic shafts before clustering of AMPARs21. Unfortunately, the diffraction-limited resolution of conventional optical microscopy does not allow for unambiguous determination whether these receptor clusters are actual shaft synapses. Thus, a higher-resolution imaging approach is desired to establish the link between the morphological and functional states of these synapses. 
In the current study, we took advantage of single molecule localization-based super-resolution fluorescence microscopy22,23 and its quantitative capability, to investigate in cultured hippocampal neurons the morphology and receptor expression of different forms of excitatory synapses and their changes during development and plasticity. In the current study, we used low density culture of rat hippocampal neurons that formed synaptic connections starting from ~11 days in vitro (DIV). With immunofluorescence labeling of presynaptic scaffolding protein bassoon and postsynaptic AMPARs subunit GluA1, many synapses were visible under conventional fluorescence microscopy as fluorescent puncta with overlapping bassoon and GluA1 signals and without much discernable substructures (Fig. 1), because these synapses were usually hundreds of nanometers in size, close to the diffraction limit of optical microscopy. Thus, it is often hard to determine whether a fluorescent punctum near the dendrite is really a short spine synapse or a shaft synapse (see also Supplementary Fig. S1). Super-resolution stochastic optical reconstruction microscopy (STORM)24,25 with >10-fold improvement in resolution (Supplementary Fig. S2), has allowed for visualization of finer structural details of these synapses (Fig. 1b). Importantly, with STORM resolution dendritic GluA1 distribution facilitated visualizing dendritic profiles (Supplementary Fig. S3), it became much easier to determine whether a synapse was formed on the spine or dendritic shaft (Fig. 1b1, b2 and Supplementary Fig. S1d–f). From the STORM images, it was clear that a spine synapse generally contains postsynaptic AMPARs to oppose the presynaptic bassoon localizations. In contrast, most shaft synapses contained few AMPARs to oppose bassoon localizations (Fig. 1b2), although this was often hard to resolve in the conventional images. Fig. 1: Excitatory shaft and spine synapses revealed by STORM imaging. a, b Fluorescence microscopy of cultured hippocampal synapses with immunostaining of bassoon (green) and postsynaptic GluA1 (red). Compared to conventional imaging (a), STORM imaging (b) shows better differentiation of two synaptic morphologies: spine synapse, indicated by arrow in (b1), and shaft synapse, indicated by arrow head in (b2). Scale bars, (b): 5 µm; (b1, b2): 500 nm. c, d STORM images of shaft (c) and spine (d) synapses with immunostaining of GluN2B (green), GluA1 (red), combined with conventional fluorescence images (c1, d1) of excitatory presynaptic maker vGlut1 (blue). Scale bars: 500 nm. e Scatter plot of GluN2B and GluA1 localizations in shaft (black) and spine (red) synapses. Note that the localization number refers to the measure number of single molecule blinking event, and is much larger than the actual number of receptors (see Methods). f Histogram of NGluA1/(NGluA1 + NGluN2B) in shaft (black) and spine (red) synapses. n = 44 (shaft), 94 (spine). To determine whether these AMPARs-negative shaft synapses were excitatory silent synapses, we performed STORM imaging of NMDARs and AMPARs using antibodies against the 2B subunit of NMDARs (GluN2B) and GluA1 containing-AMPARs, respectively, in conjunction with conventional immunofluorescence imaging of vesicular glutamate transporter 1 (vGlut1). Under STORM resolution, many GluN2B positive but GluA1 negative puncta were observed with distinct line-shaped structure formed directly along the dendritic shaft (Fig. 1c). 
Furthermore, virtually all such line-shaped puncta on the shaft were also co-localized with vGlut1 puncta similar to the excitatory spine synapses that contained both GluN2B and GluA1 (Fig. 1c1, d1 and Supplementary Fig. S4), indicating that they were indeed excitatory synapses. However, because of the lack of GluA1-containing receptors that are the dominant AMPARs in hippocampal synapses26,27, these shaft synapses were most likely to be functionally silent. With STORM imaging, we were able to assess the expression of AMPARs and NMDARs using the number of single molecule localizations as a quantitative measure (see Methods)28. Figure 1e, f summarizes the localization numbers of GluN2B and GluA1 for all putative excitatory synapses identified by vGlut1 puncta from DIV 17 cultures. It is clear that most shaft synapses had low AMPAR proportion (defined as NGluA1/(NGluA1 + NGluN2B), see Methods) and could be classified as "silent synapses", in contrast to the majority of spine synapses that belonged to the class of "functional" synapses with higher AMPAR proportion (Fig. 1e, f and Supplementary Fig. S5). Notably, there also existed a relatively small number of spine-shaped silent synapses, consistent with previous observations using conventional immunofluorescence imaging21. With 3D STORM, we also observed that for the silent shaft synapses, GluN2B localizations appeared to be primarily on or near the cell surface (Supplementary Fig. S6a and Supplementary Movies S1). Similar surface expression pattern was also found for GluN2B and GluA1 localizations in dendritic spines (Supplementary Fig. S6b-d and Supplementary Movies S2). It is known that synapses become enriched in AMPARs during neuronal development and brain maturation21,29. With STORM imaging and analyses, we further evaluated receptor expression in individual synapses at different developmental stages. At DIV11, we found that the majority of synapses were silent shaft synapses, with a few spine synapses being either silent (with low AMPAR proportion similar to the silent shaft synapses) or functional (with higher AMPAR proportion) (Fig. 2a, d, g and Supplementary Fig. S7a). The maturation of the neurons was accompanied by a moderate decrease in the density of shaft synapses and a dramatic increase in the density of spine synapses (Supplementary Fig. S7). At DIV 16 -17 and DIV 21-23, the majority of spine synapses contained both AMPARs and NMDARs receptors and with high AMPAR proportion (Fig. 2b, c, e, f, h, i). In contrast, although a few shaft synapses contained high levels of AMPARs (Fig. 2f, i), the majority of shaft synapses at these stages were still silent, expressing much fewer AMPARs as compared to spine synapses (Fig. 2b, c, e, f, h, i). Further analyses revealed that during this period of development (from DIV16 -17 to DIV 21-23), there was a marked increase in the expression of AMPARs and NMDARs for spine synapses (Fig. 2e, f, j). However, the shaft synapses during the same period exhibited no increase in the level of NMDAR expression (Fig. 2e, f, k). We suspected that the NMDA receptor expression level was related to the physical size of shaft and spine synapses. To evaluate this, we first differentiated synaptic and extrasynaptic NMDAR localizations in visually identified synapses based on local density cluster analysis30, and then calculated the longest diagonal of the convex hull formed by the identified cluster of synaptic receptors as a measure of synaptic size (Supplementary Fig. S8a-c). 
Indeed, whereas spine synapses showed significant growth in size (669 ± 69 nm at DIV11,884 ± 29 nm at DIV 16-17, and 1060 ± 32 nm at DIV 21-23), shaft synapses at different stages of neuronal development had similar size (748 ± 24 nm at DIV 11; 809 ± 29 nm at DIV 16-17; 717 ± 37 nm at DIV 21-23) (Supplementary Fig. S8d). We also did the same measurements for AMPARs in DIV 16-23 spine synapses and found that AMPARs occupied a larger area than NMDARs (Supplementary Fig. S8e). To validate these measurements, we compared the data from STORM imaging with that obtained from cryo-electron tomography (cryoET). The mean size of dendritic spines measured by STORM was indeed similar to that based on cryoET measurements (Supplementary Fig. S9). Fig. 2: Expression of AMPARs and NMDARs in spine and shaft synapse at different culture stages. a–c Example dual color STORM images of GluN2B (green) and GluA1 (red) expression in shaft (a1, b1, c1) and spine (a2, b2, c2) synapses at DIV 11, 16-17 and 21-23. Scale bar, 500 nm. d–f Scatter plots of GluN2B and GluA1 localizations in shaft (black) and spine (red) synapses in culture stages corresponding to the examples in (a–c), respectively. g–i Histogram of NGluA1/(NGluA1 + NGluN2B) in shaft (black) and spine (red) synapses in culture stages corresponding to the examples in (a–c), respectively. j–k Summary of GluN2B (green) and GluA1 (red) expression in spine synapse (j) and shaft synapses (k) at different culture stages, with Error bars are standard error of the mean (SEM), with *** denoting P < 0.001, ** denoting P < 0.01, * denoting P < 0.05, n.s. denoting no significance, t-test, in this and subsequent Figs unless otherwise noted. n = 35 (shaft), 8 (spine) in DIV11; n = 51(shaft), 151(spine) in DIV16 -17; n = 30 (shaft), 171(spine) in DIV21-23. At DIV16-17 and DIV 21-23, we also noticed a tendency of increased AMPAR localizations in shaft synapses (Fig. 2e, f, k). Aside from possible contaminations from non-specific staining, this might also be related to the expression of dendritic AMPARs during development21. When we counted AMPAR localizations in non-synaptic dendritic areas, substantial receptor expression was found at all developmental stages (432.1 ± 31.7, 530.3 ± 18.5, 661.7 ± 26.7 per µm2 at DIV11, DIV16-17 and DIV21-23, respectively) (Supplementary Fig. S10). Based on these values, we could estimate the "background" AMPAR localizations for an average shaft synapse (141.0 ± 8.0, 249.8 ± 16.2, 251.1 ± 23.4 AMPAR localizations in shaft synapse at DIV11, DIV16-17 and DIV21-23 respectively). Such "background" could account for a substantial portion of the observed AMPAR localizations in these shaft synapses. In the above analyses, only synapses on proximal dendrites (<50 μm from soma) were included. When synapses on the distal segments (>100 μm from soma) of dendrites in DIV 16-23 cultures were examined, we found that the majority (71.1%) of them were shaft synapses (Fig. 3a, b, b1, d). In contrast, spine synapses were dominant (83.5%) in proximal dendrites of the same neurons (Fig. 3a, c, c1, d). This is consistent with the observation that shaft synapses form earlier in development than spine synapses as distal dendrites are relatively young compared to the proximal segments. Taken together, these results also suggest that the silent shaft synapse could represent a "young" synaptic state, and over time, may be converted into or replaced by functional spine synapses. Fig. 3: Differential distribution of shaft and spine synapses along neuronal dendrites. 
a Stitched conventional fluorescence images showing distal (green box) and proximal (red box) dendritic segments of a hippocampal neuron. b STORM images of the distal segments in green box of (a). A magnified view of a shaft synapse, arrow head in (b) is shown in (b1). c STORM images of the proximal segments in red box of (b). A magnified view of a spine synapse, arrow in (c) is shown in (c1). Scale bars in (a): 10 µm; (b, c): 2 µm, (b1, c1): 500 nm. d Summary of synapse density in distal (n = 21) and proximal (n = 12) segments for shaft and spine synapses. It is well known that silent synapses characterized by physiological criteria can be rapidly converted into functional ones via activity-dependent synaptic plasticity, e.g. long-term potentiation (LTP)13,15. To investigate plasticity-related changes of molecular organization in silent shaft synapses, we used brief glycine exposure to induce chemical LTP (cLTP)16,31 in cultured neurons at DIV17-18. In previous studies, we have used the cLTP protocol in the same culture to induce functional changes as measured by patch-clamp recording32. Live-cell confocal imaging also revealed long-lasting glycine-induced spine enlargement, confirming the effectiveness of the protocol (Supplementary Fig. S11). With STORM imaging, we observed dramatic recruitment of AMPARs within the postsynaptic area of shaft synapses in cLTP group (Fig. 4), as well as overall increases in synaptic size (Supplementary Fig. S12a, b, e). Quantitative analysis (see Methods)28 revealed that whereas cLTP did not significantly alter the synaptic content of NMDARs for either shaft or spine synapse (Fig. 4e), it caused substantial increase in the synaptic content of AMPARs for both synapse types (Fig. 4f). Furthermore, cLTP apparently recruited more AMPARs to shaft synapses (AMPAR localizations from 90.5 ± 12.2 in control group to 283.5 ± 23.3 in glycine-stimulated group) than to spine synapses (from 189.6 ± 13.5 to 295.1 ± 16.0), such that the resulted AMPARs in the two types of synapses reached a similar level (Fig. 4f). There was no substantial change in the proportion of spine synapses during cLTP (65.6%, 84 out of 128 in control group and 71.2%,104 out of 146 in glycine-stimulated group). Therefore, the silent shaft synapses were more "potentiable" than spine synapses. Notably, AMPARs in the "potentiated" shaft synapses (most of which were presumably silent prior to the cLTP induction) generally occupied longer distribution length than NMDARs did (Fig. 4b and Supplementary Fig. S13), similar to functional spine synapses (Fig. 2b, c and Supplementary Fig. S8e) and consistent with the observation that LTP involved extrasynaptic insertion and lateral diffusion of AMPARs33,34. Fig. 4: Differential changes of receptor expression in spine and shaft synapse accompanying chemically induced long-term potentiation. a, b Dual color STORM images of GluN2B (green) and GluA1 (red) expression in control (a) and glycine-stimulated (b) neurons, with magnified views of a spine synapse (arrow) shown in a1 and b1, and a shaft synapse (arrow head) shown in (a2, b2). Scale bars: 500 nm. c, d Scatter plots of GluN2B and GluA1 localizations in shaft (black) and spine (red) synapses in control (c) and glycine-stimulated (d) groups. e, f Summary of GluN2B (e) and GluA1 (f) localizations in control and glycine-stimulated groups. n = 44 (shaft) and 84 (spine) in control group, and n = 42 (shaft) and 104 (spine) in glycine-stimulated group. 
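As a rough worked comparison based only on the mean localization counts reported above (simple arithmetic, not an additional statistical analysis): glycine stimulation increased GluA1 localizations roughly 3.1-fold in shaft synapses (283.5/90.5) versus roughly 1.6-fold in spine synapses (295.1/189.6), which is the quantitative sense in which the previously silent shaft synapses were more strongly potentiated.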
The high resolution and molecular specificity offered by STORM imaging allow for visualization of various structural features of excitatory synapses in developing hippocampal neurons, and raise interesting issues regarding their functionality. In particular, we observed that a large number of synapses were formed directly on dendrite shafts and that the majority of silent synapses were shaft synapses, lacking postsynaptic compartmentalization imposed by the thin neck of dendritic spines35,36. Thus, compared to the functional spine synapses, the silent shaft synapses would have spatially less confined signaling, such as previously observed spread of calcium14, during plasticity induction. This property may also allow for easier recruitment of external resources for plasticity expression. Indeed, we observed a greater increase in AMPARs content in shaft synapses than in spine synapses after cLTP induction, consistent with recent observations that adding AMPARs to established functional synapses required more remodeling events than to synapses with few receptors37 (Fig. 4). A possible mechanism is the lack of a restricting spine neck that may allow for easier recruitment of additional molecules to shaft synapses. Furthermore, we observed that after cLTP induction, the localization number of AMPARs in a shaft synapse was similar to that in a spine synapse, suggesting that both types of synapses may have similar number of "slots" for AMPARs38,39. We observed more silent shaft synapses in early developmental stages, suggesting that they may represent a form of "young" synapses that eventually "maturate" into functional spine synapses, as has been noted in various systems9. Apparently, this maturation process can be accelerated by LTP induction, with fast formation of functional shaft synapse (that expresses AMPARs) as an intermediate stage. Similar functional shaft synapses were indeed observed in cultures of different stages (Fig. 2e, f). Thus, with super-resolution fluorescence imaging, we have identified at least four structurally distinct classes of excitatory synapses: the silent shaft synapses, the functional shaft synapses, the silent spine synapses, and the functional spine synapses. Among them, the functional shaft synapses and silent spine synapses were less frequent, suggesting that they may represent "transitional" states during synaptic development and plasticity. Such structural heterogeneity is likely to underlie different functional states of synapses possessing distinct properties in synaptic plasticity7,8,40. This could permit richer dynamics in the modification of neural circuits, and thus play important roles in learning and memory functions as suggested by theoretical studies41. Furthermore, it is possible that more synaptic states can be revealed when the spatial expression of additional synaptic proteins are evaluated. Along this line, future studies may reveal the functional correlates of these synaptic states and the transition among them. Primary culture of hippocampal neurons were prepared following established protocol32 with minor modifications. Briefly, hippocampi were dissected out from brains of fetal rat at embryonic day 18, followed by digestion in 0.25% trypsin (Sigma, St. Louis, MO, USA) for 15 mins at 37°C. 
The digested tissues were then washed twice with Hank's Balance Salt Solution (HBSS) buffer (Invitrogen San Diego, CA, USA) and triturated with a fire-polished glass pipette in plating medium (containing neurobasal medium (Invitrogen) supplemented with 1% Glutamax (Invitrogen), 2% B27 (Invitrogen), 1% 3.75 M NaCl (Sigma), 0.1% 25 mM l-glutamic acid (Sigma)). Cells were plated at densities of 30–40/mm2 on poly-l-lysine (Sigma) coated glass coverslips (Assistant, Sondheim, Germany) in petri dishes (Corning, Oneonta, NY, USA), and then grown in incubators maintained at 37°C and 5% CO2. At DIV 5, half of the culture medium was replaced with maintenance medium which is similar to the plating medium without addition of 0.1% 25 mM l-glutamic acid. Afterwards, 20% of the culture medium was replaced with fresh maintenance medium every 3 – 4 days. For cryoET imaging, primary culture of hippocampal neurons were grown on poly-l-lysine coated gold EM grids (Quantifoil, Au NH2 R2/2) as described previously20,42. Chemical LTP induction For cLTP induction, neurons grown on coverslips were first transferred to Mg2+-free extracellular solution (containing 150 mM NaCl, 3 mM KCl, 3 mM CaCl2, 10 mM HEPES, 5 mM glucose, 0.5 μM tetrodotoxin, 1 μM strychnine, 20 μM bicuculline methiodide, all from Sigma) and were incubated at room temperature for 10 mins. Stimulation was given by 3-min exposure to 200 μM glycine (Sigma) in the same Mg2+-free extracellular solution. After glycine stimulation, the coverslips were transferred back to original Mg2+-free extracellular solution for 20 min, followed by immunofluorescence staining and imaging. Antibody labeling and immunostaining Cyanine Dye3 (3 μg, GE Healthcare, Little Chalfont, Buckinghamshire, UK) or AlexaFluor405 (3 μg, Invitrogen, Eugene, Oregon, USA) was mixed with 1 μg AlexaFluor647 (Invitrogen), 10 μmol NaHCO3 and 100 μg antibody (Jackson ImmunoResearch, West Grove, PA, USA) in 100 μl PBS with gentle agitation at room temperature for 30 min. During reaction the Nap5 gel-filtration column (GE Healthcare) was equilibrated with 3 volumes of PBS. A UV–Vis spectrophotometer was used to detect the the number of dye labeled in single antibody (1.5–3.0 activator and 0.4-0.8 reporter labeled in one antibody should be perfect for multi-color STORM imaging). Cultured neurons were fixed by 20-min incubation in PBS (137 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4, 2 mM KH2PO4) containing 3% paraformaldehyde, then permeabilized by 0.2% Triton-X100 in PBS for 6 mins. After 1-h blocking with 3% BSA (in PBS), the sample were incubated in 3% BSA (in PBS) containing appropriate one or more of the following primary antibodies: rabbit-anti-GluA1 (31232 from Abcam, Cambridge, MA, USA), mouse-anti-Bassoon (13249 from Abcam), mouse-anti-GluN2B (610416 from BD Bioscience, San Jose, CA, USA), or guinea pig-anti-vGlut (135304 from Synaptic system, Gottingen, Germany) at 4°C overnight (20 μg/ml GluA1 and 5 μg/ml GluN2B for STORM staining), followed by incubation in appropriate fluorescently labeled secondary antibodies (Rabbit antibody conjugated with Cy3-Alexa647 STORM pair labeled to GluA1, and mouse antibody conjugated with Alexa405-Alexa647 STORM pair labeled to GluN2B or bassoon, Guinea pig conjugated with Alexa488 binding to vGlut when necessary)(Jackson ImmunoResearch) at room temperature for 40 mins. Post fixation for 20 min with 3% paraformaldehyde in PBS was performed to preserve fluorescent signals for longer periods of storage. 
All antibody dilution ratio and incubation time were kept consistent in control and glycine stimulation group. STORM imaging All imaging experiments were performed on a custom built STORM setup. The optical system consists of an inverted fluorescence microscope (Olympus, Tokyo, Japan) with a 100X oil NA1.4 objective, a translational stage (Applied Scientific Instrumentation, Eugene, OR, USA), a set of solid state lasers with output wavelengths of 405, 460, 488, 639 nm (Coherent, Santa Clara, CA, USA), 560 nm (MPB, Pointe-Claire, QC, Canada) and 532 nm (Oxxius) to provide controlled illumination light through an AOTF (Crystal Technology Inc., Palo Alto, CA, USA), an EMCCD (Andor, Belfast, UK) attached to the microscope through a Dual View image splitter (Photometrics, Tucson, AZ, Canada), into which a cylindrical lens of 1 m focal length were inserted for 3D STORM. Before STORM imaging, fixed cells were immersed in fresh imaging buffer containing 80% PBS, 10% 50%(w/v) Glucose, 10% 1 M mercaptoethylamine, with addition of 1% oxygen scavenger buffer made by 8 mg glucose oxidase and 160 μg catalase dissolved in 100 μl PBS after sufficient mixing and 1 min centrifuge. Weak 639 nm illumination was used to acquire conventional wide-field images of the samples and to identify areas of interest containing healthy dendritic and synaptic structures, usually within 50μm from the soma for proximal synapses or at least 100 μm from the soma for distal synapses. In subsequent STORM image acquisition, time series of single molecule fluorescence images were acquired at 60 Hz, with a periodic illumination pattern consisting of one activation frame followed by three imaging frames28. Averaging all frames of single molecule signals also results in equivalent "wide-field" images. For STORM imaging in cLTP experiments, activation-imaging cycles were repeated until virtually all fluorophores in both GluN2B and GluA1 channels depleted. This together with consistent fluorophore labeling and antibody staining allowed for fair comparison of receptor levels in shaft and spine synapses in control and glycine groups. STORM data processing Identification and fitting of single molecule localizations, as well as STORM image reconstruction were conducted using custom software as previously described28. In STORM imaging, one "localization" refers to an on-off switching (blinking) event of a single fluorescence molecule (Alexa647) captured by the high-speed camera. Typically, a receptor protein was labeled by a few secondary antibody molecules, and each Alexa647 fluorophore on an antibody molecule could blink multiple times before being photo-bleached24. Thus, the measured number of localizations should be largely proportional to, but generally much larger than the actual number of antigen (e.g. AMPAR or NMDAR) protein molecules. We used the localization number of each identified synapse to quantify the relative expression level of synaptic proteins, as did in previous studies28. Synapses along selected dendritic segments in reconstructed STORM images were visually identified based on the morphological features revealed by localizations of both dendritic AMPARs and synaptic proteins. The relatively even distribution of dendritic GluA1 localizations allowed for visualization of dendritic shaft profiles (Supplementary Fig. S3), whereas the clustered distribution of bassoon, AMPAR and NMDAR localizations helped identify pre- and postsynaptic compartments (Figs. 1b–d, 2a–c, 3b, c, 4a, b); Supplementary Fig. S3). 
At STORM resolution, spine synapses, even those with very short necks could be identified rather easily because the position of their synaptic protein clusters were away from dendritic shaft profiles (Fig. 1b1 and Supplementary Fig. S1d-f). Meanwhile, shaft synapses could be identified based on their distinct line-shaped clusters of GluN2B localizations that were located inside the dendritic shaft profiles (Fig. 1b2 and Supplementary Fig. S1d-f). About 20% of receptor clusters could not be identified as spine or shaft synapses based on the above criteria and were categorized as "uncertain" type. Synaptic expression of AMPARs and NMDARs were quantified by counting the total localization numbers within ROIs that enclosed identified postsynaptic compartments. Dendritic AMPAR expression density was evaluated based on the number of AMPAR localizations within randomly selected dendritic areas. The average AMPAR expression density was then multiplied by the area of an identified shaft synapse (calculated from x, y coordinate in reconstructed image) to obtain the "background" expression level of AMPARs. At the first order approximation, this localization number is proportional to the number of target protein molecules for the same batch of experiments when the labeling and imaging conditions are kept the same. Calibration of crosstalk for dual color STORM imaging We noticed that signals from the Cy3-Alexa647 channel (GluA1) could be detected in presynaptic areas and account for ~7% of total localizations from both channels, whereas ~15% localizations in the postsynaptic area were from the Alexa405-Alexa647 channel (bassoon). This could come from non-specific antibody binding and fluorescence activation crosstalk between the two channels (i.e. Alexa405-Alexa647 pair vs Cy3-Alexa647 pair). The latter was minimized by the following calibration procedure. Assuming that the acquired localization numbers from the two channels are D1 and D2, which are from the real signals d1 and d2. Considering non-specific activation b1 (the portion of d1 measured as D2) and b2 (the portion of d2 measured as D1), a set of transfer equations can be expressed as $${{D}}_1 = {{a}}_1 \ast {{d}}_1 + {{b}}_2 \ast {{d}}_2,$$ $${{D}}_2 = {{b}}_1 \ast {{d}}_1 + {{a}}_2 \ast {{d}}_2,$$ $${{a}}_1 + {{b}}_1 = 1,$$ $${{a}}_2 + {{b}}_2 = 1.$$ The parameters a1, a2, b1, b2 can be obtained from two calibration experiments using samples containing single fluorophores (i.e., d1=0, D1 + D2 = d2 for one condition, and d2 = 0, D1 + D2 = d1 for the other). Solving Eqs. (1) and (2), the calibrated localization number d1 and d2 can be expressed as \({{d}}_1 = {\upalpha}_1 \ast {{D}}_1 + {\upbeta}_2 \ast {{D}}_2,\) $${{d}}_2 = {\upbeta}_1 \ast {{D}}_1 + {\upalpha}_2 \ast {{D}}_2.$$ The parameters ɑ1, β1, ɑ2, β2 are \({\upalpha}_1 = {{a}}_2/({{a}}_1 \ast {{a}}_2 - {{b}}_1 \ast {{b}}_2),\) \({\upbeta}_1 = {{b}}_1/({{b}}_1 \ast {{b}}_2 - {{a}}_1 \ast {{a}}_2),\) \({\upbeta}_2 = {{b}}_2/({\mathrm{b}}_1 \ast {{b}}_2 - {{a}}_1 \ast {{a}}_2).\) In this study, the calibrated localizations were used for all quantitative analysis of NMDAR and AMPAR expression in shaft and spine synapses. In a typical experiment, for example, where D1 and D2 were measured as the numbers of localizations for Alexa405-Alexa647 and Cy3-Alexa647 channels, respectively. We obtained that a1=0.954, a2=0.842, b1=0.046, b2=0.158, thus ɑ1 = 1.0578, β1 = -0.0578, ɑ2 = 1.1986, β2 = −0.1986. Identification of silent synapses For identified excitatory synapses in Fig. 
1e, the AMPAR proportion values, defined as NGluA1/(NGluA1 + NGluN2B), exhibited a bimodal distribution and were well fitted with two Gaussians (Supplementary Fig. S5), one representing the silent synapse population and the other the functional synapse population. The intersection of the two curves was at NGluA1/(NGluA1 + NGluN2B) = 0.37. Empirically, we used this value to separate silent synapses from functional synapses: a synapse with a low AMPAR proportion, i.e. NGluA1/(NGluA1 + NGluN2B) < 0.37, is considered a silent synapse.
Differentiation of synaptic and extrasynaptic localizations
We adapted a local density analysis approach similar to that reported in previous studies30 to distinguish synaptic localization signal from background noise. In brief, N randomly distributed localizations (N = 8,000–400,000) in an area of S = 1600 μm² were simulated, giving an average density d = N/S. For each localization i, its nearest-neighbor distance NND(i) was obtained, and the median NND (mNND) of all N points was then computed for each simulation. By fitting the series of mNND values versus average density, a standard median NND (stmNND) was calculated as a function of the average density, which equals 471/√d. For a specific STORM image of a neuron, the average density of the whole dendritic region and its corresponding stmNND were first calculated based on the simulated stmNND function. The local density corresponding to each localization was defined as the number of neighboring localizations within 2.5 times the stmNND. Localizations in a visually identified synapse with a local density higher than the average of all localizations in the dendritic region were then considered signal, and the rest were considered background noise (Supplementary Fig. S8a2, b2, c2). After the STORM signal was identified by local density, the longest diagonal of the convex hull of all receptor localizations within the synapse was defined as a measure of synaptic length (Supplementary Fig. S8a3, b3, c3).
Live-cell imaging of glycine stimulation
Neurons were transfected with an actin-mCherry plasmid at DIV11. After 6–7 days of expression, cells were transferred to a custom chamber for live-cell imaging. During image acquisition, cells were perfused sequentially with Mg2+-free extracellular solution for more than 10 min as a baseline, with 200 μM glycine in the same Mg2+-free extracellular solution for 3 min, and then with Mg2+-free extracellular solution for more than 20 min. Throughout the procedure, an NA 1.45 oil objective on a spinning-disk confocal microscope was used to monitor changes in synapse morphology.
CryoET imaging
Neuronal cultures on EM grids at DIV16 were transferred to extracellular solution (ECS, containing 150 mM NaCl, 3 mM KCl, 3 mM CaCl2, 2 mM MgCl2, 10 mM HEPES, and 5 mM glucose, pH 7.3) and then rapidly vitrified with a plunge freezer (Vitrobot IV, FEI, Netherlands). The frozen grids were stored in liquid nitrogen until use. CryoET data were collected using a 200 kV transmission electron microscope (Tecnai F20, FEI) equipped with a K2 Summit direct electron detector (K2 camera, Gatan). Tilt series were collected from 0° to −54°, and then from +3° to +60°, at 3° intervals using SerialEM43, with the defocus value set at −6 to −10 µm and a total electron dose of 100 e/Å². The images were acquired using the K2 camera in counting mode with a final pixel size of 0.565 nm. Tilt series were aligned and reconstructed using IMOD44. The measurement of the PSD was performed using 3dmod in the IMOD package. Statistics are presented as mean ± SEM, with ***P < 0.001, **P < 0.01, *P < 0.05, and n.s.
denoting no significance. Two tailed t-test was used to verify statistical difference between two groups. Paired t-test was used to determine statistical difference between GluN2B and GluA1 distribution length within synapses. For all statistical tests, P value < 0.05 was considered as statistical significant difference. All supporting data are available from the authors upon request. Sudhof, T. C. Towards an understanding of synapse formation. Neuron 100, 276–293 (2018). Volk, L., Chiu, S. L., Sharma, K. & Huganir, R. L. Glutamate synapses in human cognitive disorders. Annu Rev. Neurosci. 38, 127–149 (2015). Kandel, E. R., Dudai, Y. & Mayford, M. R. The molecular and systems biology of memory. Cell 157, 163–186 (2014). Bi, G. & Poo, M. Synaptic modification by correlated activity: Hebb's postulate revisited. Annu Rev. Neurosci. 24, 139–166 (2001). Martin, S. J., Grimwood, P. D. & Morris, R. G. Synaptic plasticity and memory: an evaluation of the hypothesis. Annu Rev. Neurosci. 23, 649–711 (2000). Alvarez, V. A. & Sabatini, B. L. Anatomical and physiological plasticity of dendritic spines. Annu Rev. Neurosci. 30, 79–97 (2007). Bi, G. Q. & Poo, M. M. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 18, 10464–10472 (1998). Montgomery, J. M. & Madison, D. V. Discrete synaptic states define a major mechanism of synapse plasticity. Trends Neurosci. 27, 744–750 (2004). Hanse, E., Seth, H. & Riebe, I. AMPA-silent synapses in brain development and pathology. Nat. Rev. Neurosci. 14, 839–850 (2013). Kerchner, G. A. & Nicoll, R. A. Silent synapses and the emergence of a postsynaptic mechanism for LTP. Nat. Rev. Neurosci. 9, 813–825 (2008). Arendt, K. L., Sarti, F. & Chen, L. Chronic inactivation of a neural circuit enhances LTP by inducing silent synapse formation. J. Neurosci. 33, 2087–2096 (2013). Morales, M. & Goda, Y. Nomadic AMPA receptors and LTP. Neuron 23, 431–434 (1999). Liao, D., Hessler, N. A. & Malinow, R. Activation of postsynaptically silent synapses during pairing-induced LTP in CA1 region of hippocampal slice. Nature 375, 400–404 (1995). Durand, G. M., Kovalchuk, Y. & Konnerth, A. Long-term potentiation and functional synapse induction in developing hippocampus. Nature 381, 71–75 (1996). Gomperts, S. N., Rao, A., Craig, A. M., Malenka, R. C. & Nicoll, R. A. Postsynaptically silent synapses in single neuron cultures. Neuron 21, 1443–1451 (1998). Lu, W. et al. Activation of synaptic NMDA receptors induces membrane insertion of new AMPA receptors and LTP in cultured hippocampal neurons. Neuron 29, 243–254 (2001). Colonnier, M. Synaptic patterns on different cell types in the different laminae of the cat visual cortex. An electron microscope study. Brain Res. 9, 268–287 (1968). Kasthuri, N. et al. Saturated reconstruction of a volume of neocortex. Cell 162, 648–661 (2015). Sheng, M. & Hoogenraad, C. C. The postsynaptic architecture of excitatory synapses: a more quantitative view. Annu Rev. Biochem 76, 823–847 (2007). Tao, C. L. et al. Differentiation and characterization of excitatory and inhibitory synapses by cryo-electron tomography and correlative microscopy. J. Neurosci. 38, 1493–1510 (2018). Liao, D., Zhang, X., O'Brien, R., Ehlers, M. D. & Huganir, R. L. Regulation of morphological postsynaptic silent synapses in developing hippocampal neurons. Nat. Neurosci. 2, 37–43 (1999). Hell, S. W. Far-field optical nanoscopy. Science 316, 1153–1158 (2007). Huang, B., Babcock, H. & Zhuang, X. W. 
Breaking the diffraction barrier: super-resolution imaging of cells. Cell 143, 1047–1058 (2010). Rust, M. J., Bates, M. & Zhuang, X. Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat. Methods 3, 793–795 (2006). Huang, B., Wang, W., Bates, M. & Zhuang, X. Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy. Science 319, 810–813 (2008). Lu, W. et al. Subunit composition of synaptic AMPA receptors revealed by a single-cell genetic approach. Neuron 62, 254–268 (2009). Anggono, V. & Huganir, R. L. Regulation of AMPA receptor trafficking and synaptic plasticity. Curr. Opin. Neurobiol. 22, 461–469 (2012). Dani, A., Huang, B., Bergan, J., Dulac, C. & Zhuang, X. Superresolution imaging of chemical synapses in the brain. Neuron 68, 843–856 (2010). Petralia, R. S. et al. Selective acquisition of AMPA receptors over postnatal development suggests a molecular basis for silent synapses. Nat. Neurosci. 2, 31–36 (1999). Tang, A. H. et al. A trans-synaptic nanocolumn aligns neurotransmitter release to receptors. Nature 536, 210–214 (2016). Jaafari, N., Henley, J. M. & Hanley, J. G. PICK1 mediates transient synaptic expression of GluA2-lacking AMPA receptors during glycine-induced AMPA receptor trafficking. J. Neurosci. 32, 11618–11630 (2012). Fu, Z. X. et al. Dendritic mitoflash as a putative signal for stabilizing long-term synaptic plasticity. Nat. Commun. 8, 31 (2017). Lin, D. T. et al. Regulation of AMPA receptor extrasynaptic insertion by 4.1N, phosphorylation and palmitoylation. Nat. Neurosci. 12, 879–887 (2009). Groc, L. et al. Differential activity-dependent regulation of the lateral mobilities of AMPA and NMDA receptors. Nat. Neurosci. 7, 695–696 (2004). Nimchinsky, E. A., Sabatini, B. L. & Svoboda, K. Structure and function of dendritic spines. Annu Rev. Physiol. 64, 313–353 (2002). Yuste, R. Dendritic spines and distributed circuits. Neuron 71, 772–781 (2011). Sinnen, B. L. et al. Optogenetic control of synaptic composition and function. Neuron 93, 646–660 e645 (2017). Czondor, K. et al. Unified quantitative model of AMPA receptor trafficking at synapses. Proc. Natl Acad. Sci. USA 109, 3522–3527 (2012). Lisman, J. & Raghavachari, S. A unified model of the presynaptic and postsynaptic changes during LTP at CA1 synapses. Sci. STKE 2006, re11 (2006). Gerkin, R. C., Nauen, D. W., Xu, F. & Bi, G. Q. Homeostatic regulation of spontaneous and evoked synaptic transmission in two steps. Mol. Brain 6, 38 (2013). Fusi, S., Drew, P. J. & Abbott, L. F. Cascade models of synaptically stored memories. Neuron 45, 599–611 (2005). Sun, R. et al. An efficient protocol of cryo-correlative light and electron microscopy for the study of neuronal synapses. Biophys. Rep. 5, 111–122 (2019). Mastronarde, D. N. Automated electron microscope tomography using robust prediction of specimen movements. J. Struct. Biol. 152, 36–51 (2005). Kremer, J. R., Mastronarde, D. N. & McIntosh, J. R. Computer visualization of three-dimensional image data using IMOD. J. Struct. Biol. 116, 71–76 (1996). The authors thank Xiaowei Zhuang for suggestions on experimental design and comments on the manuscript, Hazen Babcock for advices on imaging system, Bin Zhang for help with culture preparation, Junjie Hao, Jiang He, and Ruobo Zhou for helpful discussions. This work was supported in part by the National Natural Science Foundation of China (31630030) and by the Strategic Priority Research Program of Chinese Academy of Science (XDB32030200). 
Cheng Xu was supported by fellowships from the Chinese Scholarship Council. These authors contributed equally: Cheng Xu, Hui-Jing Liu Hefei National Laboratory for Physical Sciences at the Microscale, University of Science and Technology of China, Hefei, Anhui, 230027, China Cheng Xu, Chang-Lu Tao, Yu-Jian Wang & Guo-Qiang Bi School of Life Sciences, University of Science and Technology of China, Hefei, Anhui, 230027, China Cheng Xu, Hui-Jing Liu, Lei Qi, Zeyu Shen, Chong-Li Tian, Pak-Ming Lau & Guo-Qiang Bi CAS Key Laboratory of Brain Function and Disease, University of Science and Technology of China, Hefei, 230027, China Hui-Jing Liu, Lei Qi, Chong-Li Tian & Pak-Ming Lau CAS Center for Excellence in Brain Science and Intelligence Technology, and Innovation Center for Cell Signaling Network, University of Science and Technology of China, Hefei, Anhui, 230027, China Guo-Qiang Bi G. Q. B and P. M. L. conceived and supervised the project. C. X., H. J. L., and L.Q. designed and implemented the experiments, and collected and analyzed data. Y. J. W. and Z. S. developed custom program for synaptic length analysis. C. L. Tao and C. L. Tian implemented cryoET experiments. G. Q. B., P. M. L., and C. X. wrote the paper. All authors participated in discussion of results and preparation of the manuscript. Correspondence to Pak-Ming Lau or Guo-Qiang Bi. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Supplementary movie S1 Xu, C., Liu, HJ., Qi, L. et al. Structure and plasticity of silent synapses in developing hippocampal neurons visualized by super-resolution imaging. Cell Discov 6, 8 (2020). https://doi.org/10.1038/s41421-019-0139-1 Received: 11 April 2019
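Editor's illustrative note (not part of the paper): a quick numerical check, under assumed simulation parameters, of the median nearest-neighbor-distance scaling used in the local-density analysis of the Methods. For a uniform random 2D point pattern of density d, theory gives a median NND of sqrt(ln 2/(π d)) ≈ 0.470/√d, i.e. about 470 nm/√d for d in localizations per μm², consistent with the 471/√d relation quoted above.

```python
# Illustrative check (assumed parameters, not the authors' code): simulated median NND
# versus the analytical scaling for a uniform 2D point pattern.
import numpy as np
from scipy.spatial import cKDTree

def median_nnd(n_points, area_side_um=40.0, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0.0, area_side_um, size=(n_points, 2))   # 40 x 40 um = 1600 um^2
    d, _ = cKDTree(pts).query(pts, k=2)                        # k=2: self plus nearest neighbor
    return np.median(d[:, 1])                                  # median NND in um

for n in (8000, 40000, 400000):
    density = n / 1600.0                                       # localizations per um^2
    simulated_nm = median_nnd(n) * 1000.0
    theory_nm = 470.0 / np.sqrt(density)
    print(n, round(simulated_nm, 1), round(theory_nm, 1))      # simulated vs theoretical mNND (nm)
```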
Math 370 at UMB, Spring 2013
February 26 Discussion
Here is a link to Euclid's proof of the infinitude of the prime numbers. While reading, keep in mind that Euclid's use of the word "measured" corresponds to our "divided". His method differs from the way I remember learning the proof, where a number is constructed as the product of the first n primes + 1, the nth prime being the greatest. Then either that number is prime, or it is divisible by some prime greater than the greatest prime. So by contradiction, there is no greatest prime. Instead Euclid essentially says that, given a certain number of primes, one can always find another, which is a simpler argument. His approach demonstrates the rejection of the concept of an actual infinity, which took over two thousand years to be legitimized by Cantor in the late 1800s, which I hope we'll get to cover later in the course. ===Matt===
Note that I did add a more modern proof of the theorem on the infinitude of primes. For those looking for this proof, it can be found here. For those looking for a more comprehensive collection of Euclid's elements, Clark University (of Worcester, MA) hosts on one of its virtual servers what appears to be a complete collection of Euclid's Elements available to read online. Feel free to check this out here, and note that you can navigate through Elements using the drop-down box at the bottom of the landing page. Props to those who know the mathematical significance behind the subdomain of www.clarku.edu that this is listed under.
I learned the proof of the infinitude of primes in a way that more closely resembles Matt's (above). However, Euclid's proofs have begun to make more sense as we have progressed to number theory, with the algebraic aspect providing more comfort in working with the geometrical representations the Greeks were accustomed to, and I do actually quite like the Euclidean proof that we covered in class. I did have a chance to take a look at this theorem, however, which does have a Wikipedia entry. While the topological proof concept does look highly interesting, I have absolutely no experience with topology, and probably would not do so well with understanding the proof. Euler's proof (he is another respected historical mathematician, at least from my perspective) would be interesting to examine if we have time, as his use of a series coupled with the Fundamental Theorem of Arithmetic (see the article) was quite interesting.
EDIT: One more interesting fact: According to Wikipedia, the largest known perfect number was discovered this year (2013) by Cooper, Woltman, Kurowski, et al. It is 34,850,340 digits long and is the 48th perfect number discovered thus far. Woltman and Kurowski helped to discover perfect numbers 37-48, though only Woltman was on the team that discovered the 36th perfect number. Also, the Cooper that discovered this year's perfect number is also the Cooper who found the most recently discovered Mersenne prime via GIMPS.
Of course it's the same Cooper, since it's the same discovery - Euler proved the one-to-one correspondence between Mersenne primes and even perfect numbers. Ethan
Should I put my class notes on the discussion page or on the lecture page?
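Returning to the Euclid discussion above, here is a minimal Python sketch (an editor's addition, not one of the class posts) of the constructive reading of the argument: from any finite list of primes, the product-plus-one yields a prime outside the list.

```python
# Minimal sketch of Euclid's argument: from any finite list of primes,
# construct N = (product of the list) + 1; any prime factor of N is new,
# because dividing N by any listed prime leaves remainder 1.
def smallest_prime_factor(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

def next_prime_from(primes):
    n = 1
    for p in primes:
        n *= p
    return smallest_prime_factor(n + 1)

primes = [2, 3, 5, 7, 11, 13]
print(next_prime_from(primes))  # -> 59, since 2*3*5*7*11*13 + 1 = 30031 = 59 * 509
```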
My notes have been "reformatted" twice for the last two classes; some of my stuff was left, but my name wasn't acknowledged. -Mingzhi
I think it's fine when people rework other people's contributions on the lecture notes, since that page is supposed to be a record of what was said in class. Wikidot records page changes, so I know you have contributed. Use this page for (signed) comments with your observations and remarks. Ethan.
Another way to look at Euclid's Proposition 29, Book X is to consider the plane numbers ab and cd where $\frac { a }{ b } =\frac { c }{ d }$, and neither ab nor cd is odd. The Pythagorean triple resulting from these two plane numbers is given as: \begin{align} { \left( ad \right) }^{ 2 }+{ \left( \frac { \left| ab-cd \right| }{ 2 } \right) }^{ 2 }={ \left( \frac { ab+cd }{ 2 } \right) }^{ 2 } \end{align} Since $ad=bc$, the value of $ad$ in the first parenthesis can be replaced with $bc$. For example, take the plane numbers 18 and 32. These are plane numbers because their factors are in the proportion $\frac { 3 }{ 6 } =\frac { 4 }{ 8 }$. If we plug them into the equation above, we get: \begin{align} { \left( 3\cdot 8 \right) }^{ 2 }+{ \left( \frac { \left| 18-32 \right| }{ 2 } \right) }^{ 2 }={ \left( \frac { 18+32 }{ 2 } \right) }^{ 2 } \end{align} \begin{align} { \left( 24 \right) }^{ 2 }+{ \left( \frac { \left| -14 \right| }{ 2 } \right) }^{ 2 }={ \left( \frac { 50 }{ 2 } \right) }^{ 2 } \end{align} \begin{align} { \left( 24 \right) }^{ 2 }+{ \left( 7 \right) }^{ 2 }={ \left( 25 \right) }^{ 2 } \end{align} It is also important to note that this is a primitive Pythagorean triple. The 3-4-5 triple is obtained by using the plane numbers 2 and 8, whose factors are in the proportion $\frac { 1 }{ 2 } =\frac { 2 }{ 4 }$. (Powell)
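A quick computational check of the plane-number identity above (an editor's addition in Python, not part of Powell's post; the helper name is made up):

```python
from fractions import Fraction
from math import gcd

def plane_number_triple(a, b, c, d):
    """Pythagorean triple from two plane numbers ab and cd with a/b = c/d (both even)."""
    assert Fraction(a, b) == Fraction(c, d)
    P, Q = a * b, c * d
    leg1 = a * d                       # equals b*c because a/b = c/d
    leg2 = abs(P - Q) // 2
    hyp = (P + Q) // 2
    assert leg1**2 + leg2**2 == hyp**2
    return leg1, leg2, hyp

print(plane_number_triple(3, 6, 4, 8))   # (24, 7, 25) from plane numbers 18 and 32
print(plane_number_triple(1, 2, 2, 4))   # (4, 3, 5)  from plane numbers 2 and 8
t = plane_number_triple(3, 6, 4, 8)
print(gcd(gcd(t[0], t[1]), t[2]) == 1)   # primitive triple: True
```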
DOE PAGES Journal Article: Reanalysis of the Trotter Tibia Quandary and its Continued Effect on Stature Estimation of Past-Conflict Service Members
Lynch, Jeffrey James [1]; Brown, Carrie [1]; Palmiotto, Andrea [2]; Maijanen, Heli [2]; Damann, Franklin [1]
Defense POW/MIA Accounting Agency, 106 Peacekeeper Drive, Bldg 301, Offutt Air Force Base, NE 68113-4006
Defense POW/MIA Accounting Agency, 590 Moffet Street, Bldg 4077, Joint Base Pearl Harbor-Hickam, HI 96853-5330
Journal Name: Journal of Forensic Sciences; Journal Volume: 64; Journal Issue: 1; Journal ID: ISSN 0022-1198
Lynch, Jeffrey James, Brown, Carrie, Palmiotto, Andrea, Maijanen, Heli, and Damann, Franklin. Reanalysis of the Trotter Tibia Quandary and its Continued Effect on Stature Estimation of Past-Conflict Service Members. United States: N. p., 2018. Web. doi:10.1111/1556-4029.13806.
Tracking the Sun: Pricing and Design Trends for Distributed Photovoltaic Systems in the United States - 2019 Edition
Technical Report. Barbose, Galen; Darghouth, Naim; Elmallah, Salma; ...
Lawrence Berkeley National Laboratory (LBNL)'s annual Tracking the Sun report summarizes installed prices and other trends among grid-connected, distributed solar photovoltaic (PV) systems in the United States.1 This edition focuses on systems installed through year-end 2018, with preliminary trends for the first half of 2019. As in years past, the primary emphasis is on describing changes in installed prices over time and variation across projects.
This year's report also includes an expanded discussion of other key technology and market trends, along with several other new features, as noted in the text box below. Trends in this report derive frommore » project- level data reported primarily to state agencies and utilities that administer PV incentives, renewable energy credit (REC) registration, or interconnection processes. In total, data were collected and cleaned for 1.6 million individual PV systems, representing 81% of all U.S. distributed PV systems installed through 2018. The analysis of installed prices is based on the subset of roughly 680,000 host-owned systems with available installed price data, of which 127,000 were installed in 2018. A public version of the full dataset is available at trackingthesun.lbl.gov. Numerical results are denoted in direct current (DC) Watts (W) and real 2018 dollars. Non-residential systems are segmented into small vs. large non- residential, based on a cut-off of 100 kW. Distributed PV Project Characteristics. Key technology and market trends based on the full dataset compiled for this report are as follows. • PV systems continue to grow in size, with median sizes in 2018 reaching 6.4 kW for residential systems and 47 kW for non-residential systems. Sizes also vary considerably within each sector, particularly for non-residential systems, for which 20% were larger than 200 kW in 2018. • Module efficiencies continue to grow over time, with a median module efficiency of 18.4% across all systems in the sample in 2018, a full percentage point increase from the prior year. • Module-level power electronics—either microinverters or DC optimizers—have continued to gain share across the sample, representing 85% of residential systems, 65% of small non- residential systems, and 22% of large non-residential systems installed in 2018. • Inverter-loading ratios (ILRs, the ratio of module-to-inverter nameplate ratings) have 1 In the context of this report "distributed PV" includes both residential as well as non-residential rooftop systems and ground-mounted systems smaller than 5 MWAC (or roughly 7 MWDC). An accompanying LBNL report, Utility-Scale Solar, addresses trends in the utility-scale sector, which consists of ground-mounted PV systems larger than 5 MWAC. New Features in This Year's Tracking the Sun • Expanded Discussion of Project Characteristics. This year's report includes additional trends related to distributed PV orientation, inverter loading ratios, and solar-plus-storage. • Focus on Host-Owned Systems for Installed Pricing Analysis. In order to simplify the analysis and discussion, the report now excludes third-party owned systems from its analysis of installed pricing trends, though those systems are included when characterizing broader technology and market trends. • Multi-Variate Regression Analysis. The report now includes an econometric model of installed pricing variation across residential systems installed in 2018 (see Appendix C), complementing the descriptive analysis. 2 Tracking the Sun generally grown over time, and are higher for non-residential systems than for residential systems. In 2018, the median ILR was 1.11 for residential systems with string inverters and 1.16 for those microinverters, while large non-residential systems had a median ILR of 1.24. • Roughly half (52%) of all large non-residential systems in the 2018 sample are ground- mounted, while 7% have tracking. 
In comparison, 17% of small non-residential systems and just 3% of residential systems are ground-mounted, and negligible shares have tracking. • Panel orientation has become more varied over time, with 57% of systems installed in 2018 facing the south, 23% to the west, and most of the remainder to the east. • A small but increasing share of distributed PV projects are paired with battery storage, typically ranging from 1-5% in 2018 across states in our dataset, though much higher penetrations occurred in Hawaii and in a number of individual utility service territories. • Third-party ownership (TPO) has declined in recent years, dropping to 38% of residential, 14% of small non-residential, and 34% of large non-residential systems in the 2018 sample. • Tax-exempt customers—consisting of schools, government, and nonprofit organizations— make up a disproportionately large share (roughly 20%) of all 2018 non-residential systems. Temporal Trends in Median Installed Prices. The analysis of installed pricing trends in this report focuses primarily on host-owned systems. Key trends in median prices, prior to receipt of any incentives, are as follows. • National median installed prices in 2018 were $3.7/W for residential, $3.0/W for small non- residential, and $2.4/W for large non-residential systems. Other cost and pricing benchmarks tend to be lower than these national median values, and instead align better with 20th percentile values (see Text Box 5 in the main body for further discussion of these issues). • Over the last full year of the analysis period, national median prices fell by $0.2/W (5%) for residential, by $0.2/W (7%) for small non-residential, and by $0.1/W (5%) for large non- residential systems. Those $/W declines are in-line with trends over the past five years. • Over the longer-term, since 2000, installed prices have fallen by $0.5/W per year, on average, encompassing a period of particularly rapid declines (2008-2012) when global module prices rapidly fell. In many states, the long-term drop in (pre-incentive) installed prices has been substantially offset by a corresponding drop in rebates or other incentives. • Preliminary and partial data for the first half of 2019 show roughly a $0.1/W drop in median installed prices compared to the first half of 2018, though no observable drop relative to the second half of 2018. Those trends are based on a subset of states, consisting of larger markets, where price declines have recently slowed compared to other states. • Installed price declines reflect both hardware and soft-cost reductions. Since 2014, following the steep drop in global module prices, roughly 64% of the total decline in residential installed prices is associated with a drop in module and inverter price, while the remaining 36% is due to a drop in soft costs and other balance-of-systems (BoS) costs. For non- residential systems, a slightly higher percentage of total installed price declines is attributable to BoS and soft costs. Variation in Installed Prices. This report highlights the widespread variability in pricing across projects and explores some of the drivers for that variability, focusing primarily on systems installed 3 Tracking the Sun in 2018. The exploration of pricing drivers includes both basic descriptive comparisons as well as a more formal econometric analysis. Key findings include the following. 
• Installed prices in 2018 ranged from $3.1-4.5/W for residential systems (based on the 20th and 80th percentile levels), from $2.4-4.0/W for small non-residential systems, and from $1.8-3.3/W for large non-residential systems. • Installed prices within each customer segment vary substantially depending on system size, with median prices ranging from $3.3-4.3/W for residential, from $2.7-3.4/W for small non- residential, and from $2.0-3.6/W for large non-residential systems, depending on size. • Installed prices also vary widely across states, with state-level median prices ranging from $2.8-4.4/W for residential, $2.5-3.7/W for small non-residential, and $1.7-2.5/W for large non-residential systems. • Across the top-100 residential installers in 2018, median prices for each individual installer generally ranged from $3.0-5.0/W, with most below $4.0/W. • Median prices are notably higher for systems using premium efficiency modules (>20%) and for systems with microinverters or DC optimizers. Comparisons between residential retrofits and new construction, and comparisons based on mounting configuration, are both less revealing, likely due to relatively small underlying sample sizes. • The multi-variate regression analysis, which focuses on host-owned residential systems installed in 2018, shows relatively substantial effects associated with system size (a $0.8/W range between 20th and 80th percentile system sizes) and with other system-level factors, including those related to module efficiency (+$0.2/W for systems with premium efficiency modules), inverter type (+$0.2/W for systems with either microinverter or DC-optimizers), ground-mounting (+$0.3/W), and new construction (-$0.5/W). • In comparison, the regression analysis found relatively small effects for various market- and installer-related drivers—including variables related to market size (a $0.2/W range between the 20th to 80th percentile values for market size), market concentration (a $0.1/W range), household density (a $0.2/W range), average household income (no effect), and installer experience (no effect). • After controlling for various system-, market-, and installer-level variables, the regression analysis still found substantial residual pricing differences across states (a $1.5/W range), indicating that other, unobserved factors significantly impact installed prices at the state- or local-levels.« less Magnetic-field effects on the fragile antiferromagnetism in YbBiPt Journal Article Ueland, B. G. ; Kreyssig, A. ; Mun, E. D. ; ... - Physical Review B We introduce neutron-diffraction data for the cubic-heavy-fermion YbBiPt that show broad magnetic diffraction peaks due to the fragile short-range antiferromagnetic (AFM) order persist under an applied magnetic-field H. Our results for H⊥ [more » $$\overline{1}10$$] and a temperature of T = 0.14 (1) K reflect that the ($$\frac{1}{2}$$, $$\frac{1}{2}$$, $$\frac{3}{2}$$) magnetic diffraction peak can be described by the same two-peak line shape found for μ 0 H = 0 T below the Néel temperature of T N = 0.4 K . Both components of the peak exist for μ 0 H ≲ 1.4 T , which is well past the AFM phase boundary determined from our new resistivity data. 
Using neutron-diffraction data taken at T = 0.13(2) K for H ∥ [001] or [110] , we show that domains of short-range AFM order change size throughout the previously determined AFM and non-Fermi liquid regions of the phase diagram, and that the appearance of a magnetic diffraction peak at ($$\frac{1}{2}$$, $$\frac{1}{2}$$, $$\frac{1}{2}$$) at μ 0 H ≈ 0.4 T signals canting of the ordered magnetic moment away from [111] . The continued broadness of the magnetic diffraction peaks under a magnetic field and their persistence across the AFM phase boundary established by detailed transport and thermodynamic experiments present an interesting quandary concerning the nature of YbBiPt's electronic ground state.« less DOI: 10.1103/PhysRevB.99.184431 Climate data, analysis and models for the study of natural variability and anthropogenic change Technical Report Jones, Philip D. Gridded Temperature Under prior/current support, we completed and published (Jones et al., 2012) the fourth major update to our global land dataset of near-surface air temperatures, CRUTEM4. This is one of the most widely used records of the climate system, having been updated, maintained and further developed with DoE support since the 1980s. We have continued to update the CRUTEM4 (Jones et al., 2012) database that is combined with marine data to produce HadCRUT4 (Morice et al., 2012). The emphasis in our use of station temperature data is to access as many land series that have been homogenized by Nationalmore » Meteorological Services (NMSs, including NCDC/NOAA, Asheville, NC). Unlike the three US groups monitoring surface temperatures in a similar way, we do not infill areas that have no or missing data. We can only infill such regions in CRUTEM4 by accessing more station temperature series. During early 2014, we have begun the extensive task of updating as many of these series as possible using data provided by some NMSs and also through a number of research projects and programs around the world. All the station data used in CRUTEM4 have been available since 2009, but in Osborn and Jones (2014) we have made this more usable using a Google Earth interface (http://www.cru.uea.ac.uk/cru/data/crutem/ge/ ). We have recently completed the update of our infilled land multi-variable dataset (CRU TS 3.10, Harris et al., 2014). This additionally produces complete land fields (except for the Antarctic) for temperature, precipitation, diurnal temperature range, vapour pressure and sunshine/cloud. Using this dataset we have calculated sc-PDSI (self-calibrating Palmer Drought Severity Index) data and compared with other PDSI datasets (Trenberth et al., 2014). Also using CRU TS 3.10 and Reanalysis datasets, we showed no overall increase in global temperature variability despite changing regional patterns (Huntingford et al., 2013). Harris et al. (2014) is an update of an earlier dataset (Mitchell and Jones, 2005) which also had earlier DoE support. The earlier dataset has been cited over 1700 times according to ResearcherID on 31/July/2014 and the recent paper has already been cited 22 times. Analyses of Temperature Data Using the ERA-Interim estimate of the absolute surface air temperature of the Earth (instead of in the more normal form of anomalies) we compared the result against estimates we produced in 1999 with earlier DoE support. 
The two estimates are surprisingly close (differing by a couple of tenths of a degree Celsius), with the average temperature of the world (for 1981-2010) being very close to 14°C (Jones and Harpham, 2013). We have assessed ERA-Interim against station temperatures from manned and automatic weather station measurements across the Antarctic (Jones and Lister, 2014). Agreement is generally excellent across the Antarctic Peninsula and the sparsely sampled western parts of Antarctica. Differences tend to occur over eastern Antarctica where ERA-Interim is biased warm (up to 6°C) in the interior of the continent and biased cool (up to 6°C) for some of the coastal locations. Opportunities presented themselves during 2012 for collaborative work with a couple of Chinese groups. Three papers develop new temperature series for China as a whole and also for the eastern third of China (Wang et al., 2014, Cao et al., 2013 and Zhao et al., 2014). A dataset of ~400 daily Chinese temperature stations has been added to the CRU datasets. The latter paper finds that urban effects are generally about 10% of the long-term warming trend across eastern China. A fourth paper (Wang et al., 2013) illustrates issues with comparisons between reanalyses and surface temperatures across China, a method that has been widely used by some to suggest urban heating effects are much larger in the region. ERA-Interim can be used but NCEP/NCAR comparisons are very dependent on the period analysed. Earlier a new temperature dataset of homogenized records was developed for China (Li et al., 2009). Urbanization has also been addressed for London (Jones and Lister, 2009) where two rural sites have not warmed more than a city centre site since 1900. Additionally, in Ethymiadis and Jones (2010) we show that land air temperatures agree with marine data around coastal areas, further illustrating that urbanization is not a major component of large-scale surface air temperature change. Early instrumental data (before the development of modern thermometer screens) have always been suspected of being biased warm in summer, due to possible direct exposure to the sun. Two studies (Böhm et al., 2010 and Brunet et al., 2010) show this for the Greater Alpine Region (GAR) and for mainland Spain respectively. The issue is important before about 1870 in the GAR and before about 1900 in Spain. After correction for the problems, summer temperature estimates before these dates are cooler by about 0.4°C. In Jones and Wigley (2010), we discussed the importance of the biases in global temperature estimation. Exposure and to a lesser extent urbanization are the most important biases for the land areas, but both are dwarfed by the necessary adjustments for bucket SST measurements before about 1950. Individual station homogeneity is only important at the local scale. This was additionally illustrated by Hawkins and Jones (2013) where we replicated the temperature record developed by Guy Stewart Callendar in papers in 1938 and 1961. Analyses of Daily Climate Data Work here indicates that ERA-Interim (at least in Europe, Cornes and Jones, 2013, discussed in more detail in this proposal) can be used to monitor extremes (using the ETCCDI software – see Zhang et al., 2011). Additionally, also as a result of Chinese collaboration, a new method of daily temperature homogenization has been developed (Li et al., 2014). In Cornes and Jones (2011) we assessed storm activity in the northeast Atlantic region using daily gridded data. 
Even though the grid resolution is coarse (5° by 5° lat/long) the changes in storm activity are similar to those developed from the pressure triangle approach with station data. Analyses of humidity and pressure data In Simmons et al. (2010) we showed a reduction in relative humidity over low-latitude and mid-latitude land areas for the 10 years to 2008, based on monthly anomalies of surface air temperature and humidity from ECMWF reanalyses (ERA-40 and ERA-Interim) and our earlier land-only dataset (CRUTEM3) and synoptic humidity observations (HadCRUH). Updates of this station-based humidity dataset (now called HadISDH) extend the record, showing continued reductions (Willett et al., 2013). Analyses of Proxy Temperature Data In Vinther et al. (2010), relationships between the seasonal stable isotope data from Greenland Ice Cores and Greenland and Icelandic instrumental temperatures were investigated for the past 150-200 years. The winter season stable isotope data are found to be influenced by the North Atlantic Oscillation (NAO) and very closely related to SW Greenland temperatures. The summer season stable isotope data display higher correlations with Icelandic summer temperatures and North Atlantic SST conditions than with local SW Greenland temperatures. In Jones et al. (2014) we use these winter isotope reconstructions to show the expected inverse correlation (due to the NAO) with winter-season documentary reconstructions from the Netherlands and Sweden over the last 800 years. Finally, in this section Jones et al. (2013) shows the agreement between tree-ring width measurements from Northern Sweden and Finland and an assessment of the link to explosive volcanic eruptions. An instrumental record for the region in the early 19th century indicates that the summer of 1816 was only slightly below normal, explaining why this year has normal growth for both ring width and density. GCM/RCM/Reanalysis Evaluation In this section we have intercompared daily temperature extremes across Europe in Cornes and Jones (2013) using station data, E-OBS and ERA-Interim. We have additionally considered the impact of the urban issue on the global scale using the results of the Compo et al. (2011) Reanalyses, 20CR. These only make use of SST and station pressure data. Across the world's land areas, they indicate similar warming since 1900 to that which has occurred (Compo et al., 2013), again illustrating that urbanization is not the cause of the long-term warming. Changes in HadCRUH global land surface specific humidity and CRUTEM3 surface temperatures from 1973 to 1999 were compared to the CMIP3 archive of climate model simulations with 20th Century forcings (Willett et al., 2010). The models reproduce the magnitude of observed interannual variance over all large regions. Observed and modelled trends and temperature-humidity relationships are comparable with the exception of the extra-tropical Southern Hemisphere where observations exhibit no trend but models exhibit moistening.« less Utility seeks openness on issue of exposure Journal Article Kinkead, R.W. The coupling [open quotes]of scientific uncertainty with public alarm creates a quandary of U.S. electric utilities[close quotes] on the electromagnetic-field issues, says Robert W. Kinkead of the Public Service Electric and Gas Company of Newark, New Jersey. Techniques to reduce exposure levels are available, he explains, but they can be very expensive if exposure is to be brought down low enough to satisfy some critics. 
The issue also creates a quandary in communications: Since few answers are available, is it better for utilities to maintain a low public profile, responding only when asked, or is it better for them to step forward with comprehensive public communications programs? Most utilities tend toward the former approach, but Public Service Electric and Gas Company (PSE&G) adopted the latter. It fashioned programs specially designed to inform its employees, opinion leaders, and the public. The goal, explains Kinkead, is to establish PSE&G as a credible source of electromagnetic-field information, not to "sidestep the unavoidable controversy that will continue to surround the issue until research produces more definite information about its effects."
Audit Report on Management Controls over the Use of Service Contracts at the Office of River Protection
Technical Report
The Department of Energy's (Department) Office of River Protection (ORP) is responsible for the storage, treatment, and disposal of over 53 million gallons of highly radioactive waste from over 40 years of plutonium production at the Hanford Site. Because of the diversity, complexity, and large scope of its mission, coupled with its small staff, ORP told us that it has found it necessary to engage in service contracts to obtain consulting services, technical expertise, and support staff. Federal policy generally permits contractors to perform a wide range of support service activities, including, in most situations, the drafting of Government documents subject to the review and approval of Federal employees. Federal policy issued by the Office of Management and Budget, however, prohibits contractors from drafting agency responses to Congressional inquiries and reports issued by the Office of Inspector General and Government Accountability Office (GAO) because they are so closely related to the public interest and provide the appearance of private influence. To provide a majority of its needed services, ORP issued a Blanket Purchase Agreement to Project Assistance Corporation (PAC) in 2003. Through the Blanket Purchase Agreement, ORP acquired services in the areas of project management, risk assessment, program assessment, quality assurance, safety, cost and schedule estimating, budgeting and finance, and engineering. PAC has, in turn, subcontracted with various other firms to obtain some of the services needed by ORP. From 2005 to 2008, the total annual cost for the contract with PAC had grown from $4.7 million to $9.2 million. Because of the extent of the services provided and the growing costs of the contract, we conducted this review to determine whether ORP appropriately administered its contract with the Project Assistance Corporation. Our review disclosed that, in some instances, ORP had not appropriately administered all work performed under the PAC contract. Specifically, ORP allowed PAC employees to perform work that was inherently governmental and created situations where a potential conflict of interest occurred: (1) ORP assigned PAC employees responsibility for providing information and responses to Congressional inquiries and reports issued by the GAO and the Department of Energy's Office of Inspector General (OIG); and (2) PAC employees were also allowed to perform functions that created potential conflicts of interest. ORP permitted PAC employees, for example, to develop statements of work and approve funding of work to be performed under PAC's own contract.
We concluded that these problems occurred, at least in part, because ORP had not established controls necessary to effectively administer the PAC contract. Federal procurement regulations recommended that agencies provide additional management controls over contractors whose work has the potential to influence the action of government officials. ORP, however, had not implemented the controls specifically recommended in Federal policy guidance for administering contracts, including: (1) Performing conflict of interest reviews; and (2) Separating contractor and Department employees either physically or organizationally. By not effectively administering its contract with PAC, ORP increased the risk that decisions based on work performed by the contractor may not have been made in the best interests of the Department. For example, ORP increased the risk that approved work would be unnecessary or too costly. As we also recently noted in our report on 'Management Challenges at the Department of Energy' (DOE/IG-0808, December 2008), contract administration issues such as those discussed in this report remain a significant vulnerability. Continued efforts to improve this area are vitally important since the risk that contractors receive payments for unallowable costs could also increase as the Department expands its contracting activities under the American Recovery and Reinvestment Act. To its credit, however, ORP has recognized that there are weaknesses in its oversight of the PAC contract and is in the process of taking certain corrective actions. ORP indicated that it had reassigned responsibility to a Federal employee for responding to Congressional requests, GAO reviews, and OIG reports, and, planned to physically separate PAC employees from their government counterparts. While positive, those actions do not sufficiently address the issues identified in our report. Accordingly, we have made several recommendations designed to strengthen internal controls over this area.« less
← Challenging the 2 degree target Evidence of deep ocean cooling? → Posted on October 4, 2014 by curryja | 454 Comments Some things that caught my eye this past week. Raymond Pierrehumbert responds to Stephen Koonin's WSJ essay; thinks "climate science is settled enough" and thinks Koonin's argument in @WSJ stems from surfing willfully ignorant skeptic's blogs [link] In cause you are dying to find out how my exchange with Greg Laden turned out, over the favorited Mark Steyn tweet, see this synthesis by twitchy. The best tweet from my perspective was Euphonius Bugnuts: Well, you gotta admit, Greg's attribution logic for you being a denier is tighter than IPCC's for AGW. Dinner at Nic's. Nic Lewis hosted a dinner for skeptics and climate scientists, that made the Guardian. Tamsin defends her host in the comments: "Nic Lewis is quite literally a gentleman and a scholar". Both scientists and skeptics in the UK seem much more civil than in the US. Let's not Reinvent the Flat Tire – smart, adaptive thinking from the World Bank [link] California drought – linked to human-caused climate change? [link] Politics isn't just about manipulating people, its about learning from them – a review of Jonathan Haidt's The Righteous Mind [link] New York Magazine: How to convince conservatives on climate change [link] How biased are scientists? Great post by @jonmbutterworth on Bayes and belief [link] Antarctic sea-ice hits new high as scientists puzzle over the cause [link] How the #oil and #gas boom is changing America – really great interview with Michael Levi [link] Not just a problem for alligators – Climate Change Could Alter the Human Male-Female Ratio [link] Bigger surge than Sandy – The freak 1821 hurricane,why it should worry coastal residents [link] James Annan responds to Lewis/Curry paper: Why not try the Lewis/Curry climate sensitivity method on a GCM? [link] Talking sense about climate in India [link] Chip Knappenberger tweets: Cargo Ship Makes 1st-Ever Solo Trip Through NW Passage [link] "Thru fuel savings,GHG emissions reduced by 1,300 tons " Hmmm. The PAGES2k group rediscovers the medieval warm period. ClimateAudit Nassim Taleb: My (civilized) debate with Sornette: diverging views of risks,but not on science/probability: [link] China's one-word anster to Obama's climate plan [link] Oliver Geden: Now even the @guardian posts reflections on Plan B for int #climate policy [link] JustinHGillis on attribution of climate & weather extremes (read past the headline) [link] … Joke of the week: WH PRES SECRETARY tweets: Titanic star Leonardo DiCaprio led a Global Warming awareness march. You'd think he'd be ok with fewer icebergs. David Wojick | October 4, 2014 at 10:29 am | Anyone who thinks that skeptics are willfully ignorant has opted out of science. David L. Hagen | October 4, 2014 at 6:52 pm | The review of Lewis' dinner was remarkable for the Guardian: >"When people say the science is settled, they mean there is such as thing as anthropogenic climate change. Where it's not settled is the rate of change, how much it's going to warm, how fast it'll warm under different levels of CO2 and exactly how it will affect different regions," says Ted Shepherd, a climate scientist at Reading University and Grantham Chair in Climate Science. . . . 
A survey of the table at the end of the meal revealed that the views of scientists and sceptics on the level of "transient climate response" – or how much the world would warm should levels of pre-industrial CO2 be doubled – differed only by around 0.4C, recounts journalist David Rose. Sounds like they did not have any skeptics at the table. If the IPCC mean estimate is 3 degrees C does this mean that the supposed skeptics estimated 2.6 degrees? Or that no one at the table guessed 3 C? Does he say what this 0.4 C range was? This is certainly not the range in the general debate, which is more like from 0 to 6 degrees C. Notice the bogus distinction between scientists and skeptics. avid Wojick Nic Lewis and Judith Curry just posted a paper calculating the mean ECS as 1.64 K. Lewis N and Curry J A: The implications for climate sensitivity of AR5 forcing and heat uptake estimates, Climate Dynamics (2014), PDF using 1859–1882 for the base period and 1995–2011 for the final period, thus avoiding major volcanic activity, median estimates are derived for ECS of 1.64 K and for TCR of 1.33 K. ECS 17–83% and 5–95% uncertainty ranges are 1.25–2.45 K and 1.05–4.05 K; the corresponding TCR ranges are 1.05–1.80 K and 0.90–2.50 K. Curry posted: Lewis & Curry Climate Sensitivity, Uncertainty The implications for climate sensitivity of AR5 forcing and heat uptake estimates Ted Shepherd is coauthor of the presentation showing climate sensitivity range down to 2: WCRP Grand Challenge on Clouds, Circulation and Climate Sensitivity This summarizes the IPCC models. Thus, I surmise that the 0.4 C range is from 1.64 to 2.04. David Wojick | October 5, 2014 at 3:12 pm | Thanks DH, but most alarmist warmers estimate CS at well above 2 so I guess there were no alarm-warmers there. And lots of skeptics think it well below 1.6, including 0 (or in my case that CS is an scientifically incoherent concept which therefore has no value), so it seems there were no skeptics there either. Looks like a table full of lukewarmers. Perhaps the Guardian should have said that. Note under the Chatham House Rule "Lukewarmer" doesn't sound like "bad news" that "sells"! rhhardin | October 4, 2014 at 10:48 am | "Chip Knappenberger tweets: Cargo Ship Makes 1st-Ever Solo Trip Through NW Passage "Thru fuel savings,GHG emissions reduced by 1,300 tons " It's natural climate system adaptation. PA | October 4, 2014 at 11:30 am | The Nunavik is a Polar Class 4/ice class ICE-15 ship. http://no.cyclopaedia.net/wiki/Bay-class_icebreaking_tug "MV Nunavik the newest icebreaker to hit Arctic waters " Armored cargo vessel. Not overly impressed. http://www.vancouvermaritimemuseum.com/permanent-exhibit/st-roch-national-historic-site The wooden ship St. Roch did it a couple of times in the 1940-1944 era with a best time of 86 days. popesclimatetheory | October 4, 2014 at 12:56 pm | CO2 makes green things grow better with less water. Anything that reduces GHG emissions is a very bad thing for life on earth. pauldd | October 4, 2014 at 2:25 pm | "It's natural climate system adaptation" Yet another example of a negative feedback. A fan of *MORE* discourse | October 4, 2014 at 3:04 pm | Among the small (VERY small!) vessels who accomplished the Canada-side Northwest Passage this year are Novara, Arctic Tern, and Altan Girl (none required icebreaker assistance). The Russia-side Northern Sea Route has been wide open for weeks, with hundreds of vessels transiting. 
It is a pleasure to supply Climate Etc readers with accurate information regarding the melting Arctic sea-routes! Tonyb | October 4, 2014 at 4:57 pm | I know you have problems with arctic maritime history so this article might help with the northern sea route which was opened in the 1930's and reached its peak in 1987 before declining due to the demise of the USSR. http://www.chathamhouse.org/sites/files/chathamhouse/public/International%20Affairs/2012/88_1/88_1blunden.pdf The route was open most years in the 1930's and was of course used by Allied convoys during word war two when people such as my brave neighbour sailed through the arctic to deliver oil and supplies to Russia for which he got little thanks from them at the time. I hope to visit the Scott polar institute in Cambridge shortly to follow up my previous research there which seemed to indicate the Northern sea route could have been open for around fifty years or so during the 16 th century. That is of course anecdotal at present but interesting nonetheless tty | October 5, 2014 at 7:20 pm | 22 boats tried to get through the Northwest Passage. 6 succeded (plus 2 who had wintered and made it through in the second year). This is ane even lower success rate than last year. David L. Hagen | October 4, 2014 at 10:55 am | Trained Physicist? Could Dr. Ben Santer rise towards the standard of a trained Physicist and begin to understand models and uncertainties like Physicist Steven E. Koonin? Faustino | October 4, 2014 at 11:02 am | The Knappenberger link goes to the 1821 hurricane story. nutso fasst | October 4, 2014 at 11:23 am | "Cargo Ship Makes 1st-Ever Solo Trip Through NW Passage" Misleading news articles fail to note that the MV Nunavik is a Polar Class 4 vessel, "capable of year-round operation in thick first-year ice." The ship did not need an icebreaker escort because it IS an icebreaker. http://www.cbc.ca/news/canada/north/mv-nunavik-the-newest-icebreaker-to-hit-arctic-waters-1.2583861 http://www.nunatsiaqonline.ca/stories/article/65674mv_nunavik_from_northern_quebec_to_china_via_the_northwest_passage/ The passage of this ship does not depend on an arctic ice death spiral. Furthermore the plan is that Nunavik is to make ONE passage per year through the Northwest passage. In September when ise is at minimum. With a 36,000 ton displacement, 40,000 horsepower and Icebreaker bow it can handle ice up to 5 feet thick. They had no real problem this trip, but the captain blogged that they had to force an ice barrier in Prince of Wales Sound that would have stopped virtually any other merchantman. What I don't understand how they plan to make any money the rest of the year with such a grossly overpowered ship. There isn´t really that much demand for icebreaking ore-carriers. The only other runs I can think of are Noril'sk-Murmansk and Luleå-Rotterdam. Wagathon | October 4, 2014 at 11:33 am | Computer modeling of complex systems is as much an art as a science… global climate models describe the Earth on a grid that is currently limited by computer capabilities to a resolution of no finer than 60 miles… But processes such as cloud formation, turbulence and rain all happen on much smaller scales. These critical processes then appear in the model only through adjustable assumptions that specify, for example, how the average cloud cover depends on a grid box's average temperature and humidity. 
In a given model, dozens of such assumptions must be adjusted ("tuned," in the jargon of modelers)… For the latest IPCC report (September 2013), its Working Group I, which focuses on physical science, uses an ensemble of some 55 different models. Although most of these models are tuned to reproduce the gross features of the Earth's climate, the marked differences in their details and projections reflect all of the limitations that I have described… The models differ in their descriptions of the past century's global average surface temperature by more than three times the entire warming recorded during that time. Such mismatches are also present in many other basic climate factors, including rainfall, which is fundamental to the atmosphere's energy balance. As a result, the models give widely varying descriptions of the climate's inner workings. Since they disagree so markedly, no more than one of them can be right. ~Steven Koonin, WSJ, Climate Science Is Not Settled The clinker is, we're just not that big a deal. The left refers to CO2 as a poison or a climate pollutant to make humanity's contribution to the ecosphere nothing more than a big and dirty activity that nature is powerless to deal with. The Left demands that we must assume that human influences could have serious consequences for the climate, whereas Koonin says, "they are physically small in relation to the climate system as a whole," even when looking down the road 100 years. "For example, human additions to carbon dioxide in the atmosphere by the middle of the 21st century are expected to directly shift the atmosphere's natural greenhouse effect by only 1% to 2%. Since the climate system is highly variable on its own, that smallness sets a very high bar for confidently projecting the consequences of human influences." (Koonin, Ibid.) Jim D | October 4, 2014 at 11:41 am | Pierrehumbert's demolition of Koonin is worth reading. The link was to only page 2. Page 1 is here. http://www.slate.com/articles/health_and_science/science/2014/10/the_wall_street_journal_and_steve_koonin_the_new_face_of_climate_change.html "The idea that Climate science is settled," says Steven Koonin, "runs through today's popular and policy discussions. Unfortunately, that claim is misguided." The Left's problem with Koonin is not the message but the messenger who must now be branded, "denier." Neither the computational physicist credentials of Koonin, who served as a professor and provost at Caltech, nor his being green and a fan of renewables, are in question. Rather, his DOE job as undersecretary of science in the Obama administration lands Koonin squarely in the camp of Leftist global warming defectors – e.g., a voice of reason that's not easily silenced and will be reckoned with by all but Democrat partisan extremists who will do whatever they can to suppress skepticism and legitimate climate science. jim2 | October 5, 2014 at 5:46 pm | Willard's boy believes there exists someone who understands climate science. That makes Pierrehumbert look pretty ignorant. Wagathon | October 5, 2014 at 6:37 pm | What greater falsification of AGW theory could there be than to see liberal fascists label skeptics, "deniers?" curryja | October 4, 2014 at 11:54 am | What Pierrehumbert doesn't get is that Koonin took a very hard look at the evidence in the IPCC (he largely wrote this document), then listened to present ions by myself, held, collins, santer, linden, christy and questioned us at length (see this transcript). And his conclusions are in the WSJ.
Dismissing Koonin's remarks as plucked from skeptics blogs misses the whole point – a highly regarded physicist (a democrat to boot) takes a serious look at the evidence in the IPCC and ends up, well pretty much agreeing with moi. Matthew R Marler | October 4, 2014 at 12:03 pm | curryja: present ions I like that. I can imagine listening to "present ions". So 21st century. AK | October 4, 2014 at 12:29 pm | I suspect there's more to it than that: the phrase "evidence is incontrovertible" hardly belongs in a statement by any scientific society. nutso fasst | October 4, 2014 at 1:21 pm | While it may be true that present ions have electrifying conversations, I'd really like to know what they're saying when they're out of earshot. pokerguy (aka al neipris) | October 4, 2014 at 1:28 pm | "Dismissing Koonin's remarks as plucked from skeptics blogs misses the whole point –" Well, yes, but "misses" implies some sort of unintentional error. I find myself saying the same thing over and over these days, that this is not about the science. If nothing else is clear, that should be. pokerguy | October 4, 2014 at 1:41 pm | "the phrase "evidence is incontrovertible" hardly belongs in a statement by any scientific society." Why, one might go so far as to say it's…oh what's that term the alarmists are so fond of…oh yes…."anti-science." Political Junkie | October 4, 2014 at 3:03 pm | Are you sure she said "ion"? Positive! GaryM | October 5, 2014 at 1:26 am | Not only are skeptics to be ignored, but those who read or listen to them are to be culled from the herd as well. We can't have our sheep paying attention to banned thought. Jim D | October 5, 2014 at 1:45 am | Their 2007 statement was "The evidence is incontrovertible. Global warming is occurring". I think today this is less incontrovertible, even by skeptics who disagreed back then, so it could appear in the new statement too. Koonin says "We know, for instance, that during the 20th century the Earth's global average surface temperature rose 1.4 degrees Fahrenheit." So it is still incontrovertible, but the difference is that the skeptics have shifted since 2007 to allow this to be said. Matthew R Marler | October 5, 2014 at 2:00 am | Jim D: Their 2007 statement was "The evidence is incontrovertible. Global warming is occurring". Believers continue to disparage the distinction between "has warmed" and "is warming". The evidence for "global warming is occurring" is definitely controvertible. Jonathan Abbott | October 5, 2014 at 4:17 am | Jim D, in 2007 it was already obvious to anyone who wanted to see that global warming had stopped. Hence the statement was incorrect. OK, so now for skeptics, it comes down to the meaning of "is". Interesting. Perhaps it is better to phrase it the way Koonin did, even including a number. JimD: "Their 2007 statement was "The evidence is incontrovertible. Global warming is occurring". I think today this is less incontrovertible, even by skeptics who disagreed back then, so it could appear in the new statement too. " Fine. Show me some scientists that will bet their paychecks that 2020 will be warmer than 2014 (by UAH or RSS – the untampered temperature standards, or raw data temperature). UAH is showing a robust rise rate of over 0.1 C per decade despite the "pause". I really don't know what the skeptics are talking about. http://www.woodfortrees.org/plot/uah/mean:36/plot/uah/trend Harold | October 5, 2014 at 12:03 pm | That's the problem with this issue. Too many charged particles.
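Jim D's 0.1 C per decade figure above comes from fitting a straight line to the monthly UAH anomalies; the woodfortrees link applies a 36-month running mean for display and an ordinary least-squares trend underneath. A minimal sketch of that calculation is below, assuming a synthetic placeholder anomaly series rather than the actual UAH record.

```python
# Minimal sketch of the kind of calculation behind the woodfortrees plot cited
# above (36-month running mean of monthly anomalies plus a linear trend).
# The anomaly series here is synthetic placeholder data, NOT the real UAH record.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(1979, 2014.75, 1 / 12.0)      # decimal years, monthly steps
anom = 0.012 * (months - months[0]) + rng.normal(0, 0.15, months.size)  # fake anomalies

smoothed = np.convolve(anom, np.ones(36) / 36, mode="valid")  # 36-month running mean
slope_per_year = np.polyfit(months, anom, 1)[0]               # OLS trend on the raw series

print(f"trend: {slope_per_year * 10:.3f} C per decade")
print(f"smoothed series length: {smoothed.size} months")
```

Note that the trend is fitted to the raw monthly series; the running mean only smooths what is plotted, so the smoothing choice does not change the quoted per-decade number.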
willard (@nevaudit) | October 5, 2014 at 4:37 pm | > Dismissing Koonin's remarks as plucked from skeptics blogs misses the whole point – a highly regarded physicist (a democrat to boot) takes a serious look at the evidence in the IPCC and ends up, well pretty much agreeing with moi. If that's the point, then I'm not sure Pierrehumbert is that far off the mark: Steve Koonin is the answer to a troublesome question facing the Journal's opinion page editors: What you do if you want to continue obstructing progress on global warming pollution, but your usual stable of tame skeptics is starting to die off (Fred Seitz), retire from active research (Dick Lindzen), or discredit itself through serial scientific errors (John Christy) or by taking fanatical and manifestly untenable positions (Heartland Institute)? That puts the editors in quite a pickle. The Wall Street Journal evidently has high hopes for promoting Koonin as a prominent new voice for inaction, having lavished on him 2,000 words and front-page Saturday exposure outside the Journal's paywall. An interesting plan B might involve megaphones like Twitchy. Pierrehumbert may be right, Willard. So what? phatboy | October 5, 2014 at 5:00 pm | Yes, Koonin is either right or he's wrong. The fact that the warmists have to dig up the dirt appears to be a tacit admission that he's right. Even better: Koonin has constructed a narrative that is calculated to make people take notice even if they wouldn't ordinarily trust anything the Wall Street Journal published on global warming: I'm a physicist bringing my brilliance and outside perspective to the backwater of climate science! (He was a professor of physics, and later provost, at Caltech.) I'm green! (He was chief scientist for BP, the oil firm that likes to tout itself as the "beyond petroleum" company, and he was involved with renewables there, among other things.) I've got true-blue Democratic credentials! (He was undersecretary for science in the Department of Energy during Obama's first term.) The "whole point" indeed. Steven Mosher | October 5, 2014 at 5:16 pm | nice double game that Ray plays. if you lack credentials, then attack the lack of credentials if you have credentials, then attack the ploy of using someone with credentials. In the end Koonin is not a TRUE climate scientist. nice. self sealing …but our Willie just can't see it The "whole point" may be a narrative: But there are flaws in this narrative. Being a smart physicist can just give you more elaborate ways to delude yourself and others, along with the arrogance to think you can do so without taking the time to really understand the subject you are discussing. Freeman Dyson is a famous example. Koonin's role in the Department of Energy was marginal and largely powerless, leading ultimately to his resignation. BP's "beyond petroleum" vision evidently includes tar sands (both extraction and refining) and petcoke (arguably the worst fossil fuel of all). And anyway, how green can you be if you're the company that gave us the Deepwater Horizon disaster? No double bind there. Nice try, though. horse … dead … a … flogging Rearrange the words! Even if Lindzen never writes another paper, he will still be a force in climate debate. As to Christy, a problem was found in the sat temp calcs, he fixed it. Isn't that what scientists are supposed to do? Unlike Kaufman who has had problems with his paper pointed out with precision by SM, then "corrected" it, but alas, it still has problems. So, Willard, the hack doesn't hold water. 
http://climateaudit.org/2014/10/04/pages2k-more-upside-down/ PA | October 5, 2014 at 6:10 pm | JimD "UAH is showing a robust rise rate of over 0.1 C per decade despite the "pause". I really don't know what the skeptics are talking about." For the last 10 years (since September 2004) the change is approximately 0 (zero). Most people start from 1997 for the pause since the strong 1997/1998 El Nino was followed by a strong La Nina. I don't see the robustness of 10 years of zero (or 17 if you go back to 1997). Personally I believe in 2024 we will have 20 years of zero, since I'm sort of convinced CO2 has some sort of effect. But some solid cooling could persuade me otherwise. Jim D | October 5, 2014 at 6:37 pm | PA, 10 years is never robust. You can get downward trends with other carefully selected 15 year periods such as 1980-1995, but the overall trend is there, and the deviation from that is as small as ever and getting smaller, if anything. > Rearrange the words! All the words follow one after another in the op-ed. The first quote was the second paragraph. The second quote was the third. The third quote was the fourth paragraph. Pierrehumbert does not appear to miss what Judge Judy claims is the "whole point." Some might wish to restrict this "whole point" to the fact that Koonin agrees with her. But even then Pierrehumbert may have covered this possibility. stevepostrel | October 5, 2014 at 8:42 pm | I had read good things about Pierrehumbert over the years. The Slate piece undid that effectively at the character level–the absurd, insinuative, ad hominem approach makes him untrustworthy in anything he writes on this topic, no matter how technical it appears. If he's willing to play by these rules in public communication, where he's easy to catch, then there's no reason to trust him on more-opaque technical issues where more trust would be required absent a detailed analysis of each claim. Koonin hasn't always covered himself with glory in his own public dealings with those whose claims he disputed (see cold fusion), but two wrongs don't make a right (and Koonin's intellectual position, if not his behavior, had very strong grounding). Pierrehumbert's thinly disguised resentment of the higher status of scientists such as Koonin and Dyson is as unbecoming as was Koonin's disdain for mere chemists treading in physics years ago. captdallas2 0.8 +/- 0.2 | October 4, 2014 at 11:59 am | How has Ray been coming along with cloud forcing? I haven't heard much since the Statue of Liberty buried in the sand presentation. Groty | October 4, 2014 at 12:38 pm | Page one may have been omitted by accident. But I can understand why it may have been intentionally omitted. The first page of the rant was several paragraphs of verbiage intended to discredit Koonin by attacking his character, where he used to work, motives, and the medium in which he chose to publish his essay. Pretty prototypical left wing stuff that isn't relevant to the scientific arguments he made. The second page had some actual criticism of WHAT Koonin actually wrote about. popesclimatetheory | October 4, 2014 at 1:12 pm | More Wall Street Garbage When the science gets settled "enough", climate model output will look like real climate data. They are not there yet!!!!!!!!!!!!!!!!!!!!!!!!! Climate Model output does not agree with real earth data. Climate Model output does not resemble real earth data.
The climate alarmists, and their followers, do not appear to know or even suspect that this is a serious problem with consensus theory and models. It will only get worse for them as the model output and data diverge, more and more, every year. On the Skeptic side, we have more than one theory. As more data becomes available, it will support some theory above the others and something better will likely come soon. Soon could be days, weeks, months or years. I really suspect it will not take decades more. The data is getting better, resisting consensus based corrections, all the time. Don Monfort | October 4, 2014 at 8:13 pm | The self-inflicted demolition of jimmy dee's credibility continues. Tom Fuller | October 4, 2014 at 9:13 pm | Mr. Pierre-Humbert has a lot of respect in the circle of those who are most alarmed about climate change and his arguments need to be taken seriously. However, this piece suffers from the most common of Alarmist fallacies, that attacking the reputation or standing of your opponent is more important than countering his/her arguments. Mr. Pierre-Humbert spends over one page of a two-page article trying to delegitimize Mr. Koonin. When he finally gets around to Koonin's arguments, it's easy to see why. They basically amount to 'Koonin's measuring A instead of B' or 'he's counting from Date A instead of Date B.' But Koonin didn't do the measuring or counting. He (exactly like the IPCC) is assessing the measuring and counting done by others. As a brief aside, Pierre-Humbert notes a doubling of the rate of sea-level rise in the century before AGW is thought to have started and seems to think that's an effective argument on the issue because the rate of sea-level rise 'doubled' in the century afterwards. The Alarmist Brigade would rather call their opponents senile or out of touch with the mainstream literature than engage with the (best of) their arguments. A lot of foolishness is put forth by skeptics (Iron Sun, Sky Dragon, etc.). But the best of their arguments need to be considered seriously. After all, a similar amount of nonsense issues forth from the Alarmist camp as well. One of the reasons for their ad hominem attacks is that the best of the skeptic/lukewarmer arguments are extremely tough to counter. All the more reason for Mr. Pierre-Humbert to save time and energy by abandoning his attacks on Mr. Koonin's reputation and qualifications. Maybe he just doesn't have anything else. I find it funny how many alarmists, here and in general, dismiss the types of arguments Koonin made as "talking points". But why are they talking points? Because they were gotten from real scientists who do real science, and they are "extremely tough to counter." The fact that they're "talking points" doesn't mean they're wrong, it's the fact that they're so "extremely tough to counter" (i.e. probably right, at least in context) that makes them good talking points. I'm reduced to repeating myself trying (probably without success) to get my point across. Sigh. DocMartyn | October 4, 2014 at 11:45 am | "Greg Laden @gregladen This tweet is simply more proof that you are not interested in civil conversation. UR a liar and UR dangerious to the future. @curryja 1:23 PM – 2 Oct 2014 Coon Rapids, MN, United States" Surely this is actionable? Wagathon | October 4, 2014 at 12:01 pm | Why an atheist radio station would want to interview the prophet of a millenarian cult like Mannatollah Mike is a mystery to me.
~Mark Steyn (Mann is an island) captdallas2 0.8 +/- 0.2 | October 4, 2014 at 12:16 pm | I don't think so. I believe there is some legal issue, mens rea? Wasn't it Steven Mosher that said that if you trust the models you need your head examined? DocMartyn | October 4, 2014 at 12:31 pm | 'UR a li@r' is a bit specific. Perhaps he meant 'lair', but misspelled it. captdallas2 0.8 +/- 0.2 | October 4, 2014 at 2:51 pm | Doc, there is a pretty good chance that Greg's cheese slipped off his cracker. Taking "action" against someone who confuses bookmarking with actual "favoritism", or who has an emotional meltdown at the AGU conference, is a waste of time and not going to do much good. Just try to remember him back in the good old days when he wore his jester's hat proudly. Trust the model for sensitivity was my exact position aaron | October 5, 2014 at 9:27 pm | Wheel is spinning, but the hamster is dead. She may well be dangerious, whatever that means. I think he means HIS future. …."dangerous to the future." Why is it that just about every one of these guys is borderline illiterate. Such drudges. Theo Goodwin | October 6, 2014 at 5:06 pm | This tweet is simply more proof that you are not interested in civil conversation limited to consensus climate scientists. UR a liar and UR dangerious to the future of consensus climate science. @curryja Fixed it. Matthew R Marler | October 4, 2014 at 11:59 am | Here is an interesting comment from the ClimateAudit post on the pages2k revision: They show the following diagram of changes – all in the direction of increasing MWP warmth relative to modern warmth in their reconstruction. These are large changes from seemingly simple changes in individual proxies – a longstanding CA theme. It is "well known" that PC estimation is "unstable", meaning small changes in data produce surprisingly large changes in the obtained PC coefficients. It is one of the reasons why the selection of time series for inclusion/exclusion is so important. I put "well known" in quotes because the effect can be surprising in actual cases even to people who know of the problem, and Mann and others in their writings show what might be called an inconsistent awareness of the problem. Brandon Shollenberger | October 4, 2014 at 12:16 pm | Er, yeah, but PCA wasn't used for this. Why talk only about the effect on a methodology which wasn't used? Matthew R Marler | October 4, 2014 at 4:03 pm | Brandon Shollenberger: Er, yeah, but PCA wasn't used for this. Why talk only about the effect on a methodology which wasn't used? It just seemed interesting. RiHo08 | October 4, 2014 at 12:03 pm | Nick Lewis has a plot of Arctic Sea Ice which he posted on Lucia's Blackboard this last September: http://moyhu.blogspot.com/p/latest-ice-and-temperature-data.html#fig1 The graphs are updated daily and show this year (2014) and the current nadir and recovery of Arctic Sea Ice. Also graphed are other years of Arctic Sea Ice so one can compare the current 2014 with other years. In reading the Curry link regarding Antarctic Sea Ice the tone of the writers was that while the current record Antarctic Sea Ice extent still needed some work to explain, the scientific understanding for the Arctic Sea Ice extent was already a known: (AGW) natch. Nick Lewis supplies a lot of data that is useful, but for the life of me, I can not understand why this year's Arctic Sea Ice extent is greater than five previous years during the satellite era.
Has anyone seen a plausible explanation why there has been a recovery of Arctic Sea Ice Extent? This is especially perplexing to me in the face of 2013 being the hottest year ever due to global warming which made it so hot that an Australian tennis tournament had to be postponed for a day? That would be Nick Stokes. RiHo08 | October 4, 2014 at 1:03 pm | Capt'nDallas Of course you are right! My fault of perseveration on names. My apologies to both Nick Lewis and Nick Stokes. When the Arctic Sea Ice sets a low record, that causes a lot more snowfall that prevents the next few years from being warm enough. The new low record will likely be set, but it needs a few years to recover from the more snow fall that happens after the record low years. You can look at the data. Sea ice gets lower, lower, lowest and then a lot higher. It snows more when the oceans are more open. Ragnaar | October 4, 2014 at 3:37 pm | "Has anyone seen a plausible explanation why there has been a recovery of Arctic Sea Ice Extent?" Some surface and near surface regions of the Arctic ocean have cooled promoting sea ice formation. That's how I read what Wyatt writes: http://www.wyattonearth.net/thestadiumwave.html As the Arctic ocean loses ice, more heat should transfer from that ocean into the atmosphere. If that sea ice comes back, it should cool the local atmosphere in the short term. Ragnaar Thank you for the link to Marcia Wyatt's explanation of the "stadium wave". I do need to read and re-read explanations of a scientific publication stated in slightly different ways for me to slowly comprehend what is being said. Are you saying that the explanation why the Arctic Sea Ice Extent appears to be recovering is that the various indices captured in the stadium wave hypothesis are cycling back to have Arctic Sea Ice recover? Would the prediction then be: Arctic Sea Ice will recover to what it had been some, say 40 or 50 years ago? Faustino | October 4, 2014 at 4:57 pm | RiHo, an Australian tennis postponement does not necessarily indicate historically extreme temperatures, but a different attitude to demands on players and attendant health risks. Cf sliding roofs on courts. RiHo08 | October 5, 2014 at 11:58 am | Said somewhat tongue-in-cheek: a brief sports interruption has the same scientific weight as utterances from consensus gurus. beththeserf | October 4, 2014 at 7:58 pm | Some say the world will end in fire, Some say in ice. http://www.carbonbrief.org/media/337445/nsidc_antarcticseaiceextent_22sep14.png IPCC say, ' t' is a puzzle.' Say, if the IPCC is puzzled, that's a first! Beththeserf I tried to open the link and got a message that there was no application to open the link. Do you have another avenue to this source? beththeserf | October 5, 2014 at 10:20 pm | Jest google http://www.carbonbrief.org then click on ter thread, 'Antarctic sea ice hits new high' Dontcha know there is always a "yes…but" in the Climate Change lexicon? It tumbles off their lips like a melt pond draining. "But overall, the Arctic sea-ice loss is over three times greater than Antarctic gain." We don't know why there is sea ice gain down south BUT we can discount our ignorance of such things by pointing to the other pole with sea ice loss. http://psc.apl.washington.edu/wordpress/wp-content/uploads/schweiger/ice_volume/SPIOMASIceVolumeAnomalyCurrentV2.1.png Well, everyone is looking at the wrong chart. Sea ice volume is the only measure that really matters. The other measures are a combination of luck and weather.
The sea ice volume started to recover in 2007. The volcanoes in 2010 and 2011 hammered the sea ice with ash and caused major sea ice loss. Since the dirty ice melted the volume has been increasing steadily. This winter should get two standard deviations above the trend – breaking the trend. You may be right about looking at Arctic Sea Ice Volume instead of Extent. Do you have a suggestion as to why Arctic Sea Ice Volume may be recovering? I have been told, by very reliable IPCC consensus sources that Arctic Sea Ice is in a death spiral and not to recover until I end my sinful ways…SUV and all that. (SUV…new tires, new brakes, new windshield wipers at 125,000 miles, getting ready for this winter's snow and ice.) PA | October 5, 2014 at 12:31 pm | Ice is very sensitive to soot/ash. Ice has an albedo (reflection coefficient) of 0.9 which means it only absorbs 10% of incoming energy. Soot/Ash is around 0.1. So dusting ice with soot/ash is the same as making the sun 9 times brighter. Chinese soot, volcanic ash, soot from Canadian forest fires, and dust from wherever are all to blame. So in general – if the ice melts faster it is a combination of more dirt/higher temperatures. If it melts slower either some of the dirty ice has melted and the newer ice is cleaner or the temperatures are lower. If the ice freezes more (increased volume) the temperatures are colder. The albedo has been increasing lately. The summer temperatures have been lower. For arctic temperatures see link below: I've wondered also if the record warm northern pacific and a potential el nino might interact with the weak polar vortex/wavy jet stream to recharge ice. ordvic | October 4, 2014 at 12:03 pm | I looked a little further into Milankovitch cycles and finally found what I was looking for, sort of. The Eemian interglacial lasted from about 130,000 bp to about 115,000 bp or about 26,000 to about 28,000 yrs. I originally thought that the eccentricity must kick in to take temps down. Now I see the interglacial occurs entirely in the round orbit. The predominant factor driving temps up and down appears to be axial tilt as Milankovitch surmised. The elliptical orbit lasted from 115,000 to the Younger Dryas, or to about 20,000 to 15,000 bp. Temps again rose into the Holocene Maximum starting about 11,500 bp. During the entire glacial period temps went up and down at a lower level as determined by axial tilt combined with precession. They say the average interglacial is about 12,000 years but that Eemian and the present one look to be about 28,000 yrs long. So the 28,000 seems to match up with axial tilt but not 12,000. If the average is true it must be how tilt and precession work together? Anyway, we are at peak now on the beginning of the down slope that would hit bottom about 17,000 years from now if it is like Eemian. So I would imagine it'll be at least 5,000 years before the colder times start to show up. That is unless of course CO2 somehow mitigates that into some kind of 100,000 yr goldilocks climate as some scientists suggest. http://www.gcrio.org/CONSEQUENCES/winter96/gifs/article1-fig3.gif http://phys.org/news/2013-01-deep-ice-cores-greenland-period.html Where did I go wrong? Jim D | October 4, 2014 at 12:25 pm | Currently the tilt and eccentricity favor northern ice because the northern summer is furthest from the sun, so we are well into the cold Milankovitch phase already. However, instead of increasing, the Arctic ice cover is now decreasing due to other bigger factors. How do you explain the Antarctic ice then?
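PA's factor-of-nine claim above is simple arithmetic on absorbed fractions: clean ice with an albedo of 0.9 absorbs 10% of incoming sunlight, while a soot-covered surface with an albedo of 0.1 absorbs 90%. A quick check, taking the round albedo numbers from the comment rather than measured values:

```python
# Quick check of the absorbed-energy ratio implied by the albedos quoted above.
# The albedo values are the round numbers from the comment, not measured data.
albedo_clean_ice = 0.9   # reflects 90%, absorbs 10%
albedo_sooty_ice = 0.1   # reflects 10%, absorbs 90%

absorbed_clean = 1.0 - albedo_clean_ice
absorbed_sooty = 1.0 - albedo_sooty_ice

print(f"clean ice absorbs {absorbed_clean:.0%} of incoming sunlight")
print(f"soot-covered ice absorbs {absorbed_sooty:.0%}")
print(f"ratio: {absorbed_sooty / absorbed_clean:.0f}x more energy absorbed")  # -> 9x
```

The factor of nine applies only to the shortwave energy absorbed by the dusted surface itself, not to the overall melt rate, which also depends on temperature, cloud cover and ocean heat.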
ordvic | October 4, 2014 at 1:15 pm | Wiki says we are at 0.017 eccentricity, which is closer to the 0.000055 low eccentricity than to the 0.0679 high eccentricity. So wouldn't that indicate it is still fairly round? Also we are at 23.44 axial tilt, which is half way between 22.1 and 24.5. I guess you are right as that would indicate we are half way from peak heat at 24.5. That would have to correspond with a shorter interglacial period though since we've only been in it for 9,000 to 10,000 years. If we are past peak it would mean this one will only go another 8,000 or 10,000 for a total of about 18,000 to 20,000. That is 6,000 to 10,000 short of Eemian although still nearly twice the average. DocMartyn | October 4, 2014 at 1:39 pm | supposition I get mixed up easily. The temps started to go up rapidly about 15,000 bp and peaked in the holocene maximum btw 8,000 to 4,000 bp. If that is correct then the cycle would complete in 7,000 to 11,000 yrs. So that would mean from 8,000 bp peak we'd already be 1000 past a complete cycle and if it were 4,000 bp peak we'd still have 7,000 to go. ordvic, the roundness is a mitigating factor which may explain why we don't expect an Ice Age in this tilt cycle, but the cooling after the Holocene Optimum is consistent with the precession forcing. When I said "tilt" I meant precession, not angle. Phatboy, You have a good point because both the axial tilt and the apsidal precession (in combination with CO2) should have the Antarctic completely melted by now. Go figure! The Ice Ages don't spread from the south, and there is a good reason for that: no significant continents within range to glaciate. maksimovich | October 4, 2014 at 8:04 pm | The Ice Ages don't spread from the south Um, yes they do, e.g. Vandergoes. The evidence for early onset of maximum glaciation provides renewed support for a Southern Hemisphere 'lead' into the LGM and some indication of its cause. Strong cooling in the south commences during, or soon after, the phase when perihelion occurs during the Austral winter (30–35 kyr ago), which means that the local insolation budget was at its lowest level for the entire precessional cycle (Fig. 2). At the same time, insolation in the Northern Hemisphere was still in a positive phase, giving a local radiation budget higher than that at present. The northern 'driver' is therefore an unlikely trigger for the onset of maximum glaciation in the south and cannot have been directly responsible for a Southern Hemisphere lead. Jim D – Arctic ice has been in a pause of its own since about 2007. http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/seaice.anomaly.arctic.png phatboy "How do you explain the Antarctic ice then?" When it gets really cold water hardens. ordvic " I originally thought that the eccentricity must kick in to take temps down." My understanding of Milankovitch is a little different. It takes a Milankovitch maximum to take us out of an ice age – but the peak is short and thereafter things are metastable until the climate drops back to stable icy mode. I saw somewhere a statement we are about 2 W/m2 from glaciation. "However, there are two important sources of heat for surface heating which results in "basal sliding". One source is geothermal energy. This is around 0.1 W/m² which is very small unless we are dealing with an insulating material (like ice) and lots of time (like ice sheets). The other source is the shear stress in the ice sheet which can create a lot of heat via the mechanics of deformation."
"Once the ice sheet is able to start sliding, the dynamics create a completely different result compared to an ice sheet "cold-pinned" to the rock underneath." "..Moreover, our results suggest that thermal enabling of basal flow does not occur in response to surface warming…" "…Our simulations suggest that a substantial fraction (60% to 80%) of the ice sheet was frozen to the bed for the first 75 kyr of the glacial cycle, thus strongly limiting basal flow. Subsequent doubling of the area of warm-based ice in response to ice sheet thickening and expansion and to the reduction in downward advection of cold ice may have enabled broad increases in geologically- and hydrologically-mediated fast ice flow during the last deglaciation. Increased dynamical activity of the ice sheet would lead to net thinning of the ice sheet interior and the transport of large amounts of ice into regions of intense ablation both south of the ice sheet and at the marine margins (via calving). This has the potential to provide a strong positive feedback on deglaciation." Looks like Marshall and Clark http://scienceofdoom.com/2014/04/14/ghosts-of-climates-past-nineteen-ice-sheet-models-i/ What I think they're saying to some extent that it's a function of mass which is a function of time. Given enough cold time and transfer of liquid water from the oceans the to the ice sheets we will get a mechanical collapse. A slow motion avalanche to an interglacial. Are we collapsing the ice sheets now? If we are, there would be a reduced weight and insulation of ice which would reduce basal sliding and push in the direction of ice sheet stabilization. PA, Since the axil tilt is a 41,000 yr cycle that would leave only the apisidal precession of 21,000 years to coorespond with the short interglacial period. It seems to be how the combination of all three line up to make it happen? I'm just trying to figure out where they are now. I know the tilt and eccentricty approximately and supposedly the apisidal is north pole faces at furthest eliptical distance and south pole at closet right now. But this still doesn't tell me how far we are from peak on the downside. The apisidal is mainly what is throwing me off right now. It snows more when oceans are warm and then, after hundreds of years, it gets cold. It snows less when oceans are cold and then, after hundreds of years, it gets warm. Milankovitch cycles work with this sometimes and work against this sometimes, but Milankovitch cycles do not start or stop the snowfall. Warm oceans with no or low sea ice cause the snowfall. Cold frozen oceans stop the snowfall. Lucifer | October 4, 2014 at 3:41 pm | If more summer sunshine means less Arctic sea ice, then we can see that there will be a continued decline in Arctic sea ice for the next 100,000 years, regardless of how much more CO2 there would be: http://upload.wikimedia.org/wikipedia/commons/9/90/InsolationSummerSolstice65N.png Fernando Leanme | October 4, 2014 at 6:38 pm | Doesn't the reduced influx in the Southern Hemisphere compensate that effect? cwon14 | October 4, 2014 at 12:11 pm | http://m.canberratimes.com.au/act-news/liberals-outraged-that-kill-climate-deniers-play-is-funded-by-the-act-government-20141001-10ogo7.html#ixzz3EspGOkN5 The "Play"……."Kill Climate Deniers"….government funded, naturally. Bad Andrew | October 4, 2014 at 12:30 pm | Has The Global Warming Alarmist Marketing Campaign reached it's end date yet? They're still in hallways of the UN questioning free markets like Fanboy, it's no where near the end. 
jim2 | October 4, 2014 at 12:32 pm | From the oil boom article, linked in main post, from Vox: 4) The fourth, is the slight flagging of the natural gas boom. Careful market watchers expected 2012 to be a high point, when gas was ridiculously cheap and pushed out enormous amounts of coal. But coal clawed back a bit last year. That shouldn't have been a surprise — gas prices were unnaturally low in 2012 — and it's not an indication of long-term weakness in the idea that US gas production can keep growing. But it's a useful corrective to the idea that the gas boom would take care of country's climate problem all by itself. (end quote) I think this guy may be wrong about the price being "unnaturally" low, at least in the medium term. There are real worries now that the price will stay so low that companies will have to start shutting in higher operating cost nat gas wells. Pipelines don't yet exist to deliver all the gas being produced, so gas is being flared. Once those are in place, it will mean even more supply to market. This does not add up to a higher nat gas price scenario. AK | October 4, 2014 at 1:09 pm | This is where a lot of my commenting here has been pointing: http://3.bp.blogspot.com/-kipqa22-G6A/VDANRIBTY8I/AAAAAAAAAc8/LxPwVwwg0bo/s1600/Solar-yeast-growth-graph.jpg What is one of the big risks investors in gas infrastructure face? That exponential growth of solar. Sure, it might not happen, and even if it does, they'll get a decade or so of ROI. But how can their bets be hedged? If a working prototype bioconverter of hydrogen and CO2 to methane could be demonstrated, with good expectations that the cost could be brought down along with solar. In that case, solar energy (e.g. panels) would be competing with only the wells, instead of the whole industry. The trade-off for solar would be between solar (e.g. panels)+inverters+long distance transmission vs. solar(e.g. panels)+electrolytic hydrogen+bio-conversion to methane+long distance pipes(+gas-fired combined turbines). And, AFAIK, carrying equivalent amounts of methane thousands of kilometers (miles) is orders of magnitude cheaper than carrying electricity. Which would often make up for the lower efficiency of the electrolysis/bio-conversion steps. However, AK, there is no reason to believe this graph, which I would describe as preposterous. Capital intensive infrastructure penetration cannot happen this fast. http://ars.els-cdn.com/content/image/1-s2.0-S0360319914008489-gr2.jpg Capital intensive infrastructure penetration cannot happen this fast. UK Mobile Phone Subscriptions per 100 people; note that there are more subscriptions than inhabitants in the UK. This is because many people have more than one phone, or SIM card [13]. (Fig.2 from Mobile phone infrastructure development: Lessons for the development of a hydrogen infrastructure by Scott Hardman and Robert Steinberger-Wilckens International Journal of Hydrogen Energy Volume 39, Issue 16, 27 May 2014, Pages 8185–8193.) I suppose you think mobile phones are cute little toys you buy in a store? Don't forget the towers, data transmission, and software/protocols necessary for those phones to talk to one another, and the land-line system. Or the data infrastructure (hardware, software, and protocols) the Internet also depends on (I'm not going to provide links for that). From the article linked above: Previous studies use the example of how internal combustion engine (ICE) vehicle infrastructure was developed in the late 1800s and early 1900s [5] and [6]. 
[…] One reason for the success of the ICE was due to there being an existing petroleum supply network. This network supplied petroleum for lighting and for stationary petrol generators, as well as the farming industry [5]. This meant that ICE outcompeted BEVs and steam engine vehicles precisely because infrastructure was already present. The availability of infrastructure was a compelling reason to purchase an ICE vehicle over competitive vehicles. The mobile phone was a disruptive innovation; this can be confirmed using the 3-point disruptive technology criteria. The criteria states that innovations are disruptive innovations if they require new infrastructure, are produced by new market entrants and not incumbents, and provide a greater level of service to the end users [7]. […W]ith economies of scale and technological improvements handset unit costs were continually reduced and in around 30 years the mobile phone went from high cost low volume series in niche markets to occupying the whole landscape and achieving an enormous mass-market share (see section 1.2). [my bold] Mobile phone use would not be possible without the development of infrastructure. Consumers would not purchase a device that could not be used. As with FCVs there was a need to make a decision to invest in infrastructure before the market entry of the product could begin. The decision to invest is not an easy one, as the economic incentives to develop an infrastructure that currently has no customers are hard to identify. Nevertheless, without the development of infrastructure any technology reliant upon it will surely fail. Mobile phone infrastructure has been continually developed over the past 4 decades. An overview of the increase in network capabilities can be seen in Fig. 5 as measured by download rates, also know as band rates. A parallel logic can be applied to solar power. Both the options I mentioned above depend on mature infrastructure: electrical grid, and gas storage/distribution/use. Solar power, like Cell towers and data transmission infrastructure, would have to be built, bought, and installed. But the economics around such infrastructure change are no more "preposterous" than those of the Internet, or cell phone infrastructure. "Capital intensive infrastructure penetration" DID happen that fast. TWICE! Perhaps this relevant. Photovoltaic power doubling every 18 months. http://www.motherearthnews.com/~/media/Images/MEN/Editorial/Articles/Online%20Articles/2013/08-01/World%20Solar%20Power%20Topped%20100000%20Megawatts%20in%202012/world%20solar%20power%20graph%201%20png.PNG AK, I used the term capital intensive infrastructure precisely to distinguish solar power installations from small consumer items like cell phones. Buying a cell phone and putting a $30,000 solar system on your house are very different sorts of investments. Most people can afford the former while few can afford the latter. I think solar will be a niche technology for a long time, unless its use is mandated of course. Buying a cell phone and putting a $30,000 solar system on your house are very different sorts of investments. But what about the infrastructure? http://upload.wikimedia.org/wikipedia/commons/2/27/Telstra_Mobile_Phone_Tower.jpg AK if you plot the number of Ebola cases in the USA, extend the curve, you will note we will all be dead before the end of the year. @DocMartyn… Really small sample there: One case. (Not counting the ones who already had it and knew it before they returned from Africa.) 
OTOH we have decades of experience with the exponential growth of solar PV. Absent massive storage capability, solar (and wind) cannot replace fossil capacity in large scale power systems, due to intermittency. What they can do is reduce fossil fuel use but that makes the system as a whole more expensive because the fossil plants earn less. It is very expensive to have a fossil plant sitting there just to run in the dark or when the wind does not blow. Governments do not seem to know this. http://freeradicalnetwork.com/wp-content/uploads/2014/09/Ostrich-man-head-in-sand.gif http://i.dailymail.co.uk/i/pix/2010/04/07/article-1264092-081D0A9F000005DC-144_468x339.jpg The big risk everybody faces is all the solar panel owners expecting a subsidy because they can't generate electricity on a steady basis. They could use it to buy battery backup. rls | October 4, 2014 at 10:39 pm | AK: Your faith in the exponential advancement of technology is shared by me. However, unlike the mobile phone industry at its beginning, the power industry infrastructure is established and will change only over time, perhaps generations of time, as demand increases and existing facilities become uneconomical. […] unlike the mobile phone industry at its beginning, the power industry infrastructure is established and will change only over time, […] I suppose that's what most people thought about the land-line phone system, too. I have an idea. Instead of pylons, let's suspend the electric grid main lines using drones. Attached to the lines, they could be powered by corona discharge through a small spike. That way, we could direct power wherever it is needed! Better yet, scrap the big long-distance grid entirely, and replace it with cheap gas-fired generators feeding local micro-grids. rls | October 5, 2014 at 12:13 am | AK: Mobile phone got it start, not as a replacement for land lines, but as an extra source of communications. Also I'm uncertain regarding the envisioned future of solar. Is it going to be a consumer product or an industrial product? My original comment assumed that the customers would be the power companies. jim2 | October 5, 2014 at 8:54 am | The drone idea will help you get power from the Patagonia Desert to NYC. AK | October 5, 2014 at 10:03 am | Also I'm uncertain regarding the envisioned future of solar. Is it going to be a consumer product or an industrial product? I've actually been spending a good deal of my free time trying to understand the (potential) future economics of solar power, and some of the arguments I get here help to stimulate my thinking (much as it may sometimes appear otherwise). There are several important points that must be kept in mind: • The energy situation is very different in different places: you can't assume that the US, or Western Europe, works as a model for, e.g., India, Central Asia, China, or, especially, Africa. • There's a vast array of potential technologies, even looked at carefully and critically. For instance, I'm totally skeptical of anything having to do with hydrogen for vehicles, or for major energy storage and/or transport. But, after taking a similarly critical look at, e.g. panels and concentrating PV, I see many potential problems, but none that don't look solvable. • There are huge potential synergies that don't seem to have been looked at. Desalination and pumping come to mind immediately. These could probably be made cost-effective near-term, without the need for (energy) storage, inverters, or distance transmission. 
Maybe not everywhere, but many places. • There's a variety of technologies "in the pipeline" that could (IMO will) serve as "enabling technology" for innovative solar development. I would include: • cheap, mass-producible inflated structures, • "static robots" where robotic technology (sensing and control) is used to stabilize a structure in a single shape against distorting forces (e.g. wind pressure), • floating buildings and other structures, • cheap sunlight collection via "light pipes", • and cheap, mass-produced tracking mechanisms, suitable for concentrating PV and even enhancing the value of panels. • "Moore's Law". The exponential decrease in cost/price of information technology (IT) will serve as enabling technology for the items listed above, as well as others not thought of yet. So which technologies will grow rapidly for which markets? I can only guess, as do the industry "forecasters". Africa will probably see a much larger focus on local, small-scale development than the US/Europe. India and China may use some mix of small-scale and large (i.e. more like the US). But that's only guessing. In projecting, I start with the assumption that people like Ray Kurzweil are right about the exponential growth, although I predict that any specific technology will follow a more "yeast growth" pattern. (Thus the picture above, where I overlaid an actual yeast-growth curve over Kurzweil's exponential curve.) There's little or no difference in the curves during the early growth stage, but as the technology matures the growth tapers off. The question is: what technologies will contribute to that growth, and how? To grow as the curve predicts in developed areas, solar will have to be connected to the grid. According to the standard paradigm, this will require inverters (to convert DC from the PV cells to AC suitable for long-distance transmission), and transmission from mostly remote sites to appropriate grid connections. The alternate paradigm I'm pushing would involve converting it to methane (or oil) right at the collector, and using equally mature gas technology for storage, transport, and generation. Yes, there would be some efficiency losses, but the cost of transporting gas is (AFAIK) orders of magnitude cheaper than electricity, and all the technology could be made small-scale: electrolysis and bio-conversion for a small collector area could probably be fit into a Coke bottle, only a few feet from the actual cells. Economies of scale could be achieved by simply making millions, or billions, of such units. Improvements in sensing and communications technology, following "Moore's Law", would mean that each unit could keep track of itself, and defective units could be replaced by automated systems, perhaps supervised remotely by engineers. The exact same technology could be used on a more "one-off" basis for distributed power in, e.g. African villages. Small scale gas compression, storage, and perhaps even distribution (to homes) could allow the result to both power generators and replace wood or dung for cooking and heating. kim | October 5, 2014 at 10:10 am | I propose something as magical as Chlorophyll. The problem with chlorophyll is the very large number of life-forms adapted to eating it, and the life-forms that deploy it. Not to rule out Joule Unlimited's efforts… AK: Thank you. You have obviously given this much thought and work. Are you familiar with the work of Vaclav Smil? He writes that "Perhaps the most misunderstood aspect of energy transitions is their speed.
Substituting one form of energy for another takes a long time.". He believes it will take generations to transition from our existing fossil fuel infrastructure to a renewable energy infrastructure. I tend to agree with him and don't see the growth of computers/mobile phones as comparable to the power industry. Those were new markets and did not involve abandoning a huge existing infrastructure. A few years ago I invested in a 95% efficient gas furnace and it is still going strong and saving me money. It will be many years before I buy a renewable energy device for my house. My electric company has up-to-date gas turbines and a nuclear facility and, I believe, more nuclear in the planning stages; it would be foolish for them and costly to me to abandon those facilities and switch to renewable, especially faced with increased energy demand. However, it would be acceptable to meet increased demand with reasonably priced renewables; in that case it will take the generations that Smil writes about. @rls… This is exactly why I'm pushing the electrolysis/bio-conversion to methane approach. You keep your gas furnace, your power company keeps its gas generators, but all that cheap solar is used to generate gas to put into that system, in place of what comes from wells. Individual wells, AFAIK, tend to last less time than all the infrastructure for storing and transporting gas. For that matter, they could put solar power/gas installations on the land used for wells, and feed the resulting gas into the same pipes used for the wells. I know I need pictures. I'm working on it. In my free time. rls | October 5, 2014 at 1:53 pm | AK: Got it, please forgive my neurons, not my fault, I wasn't informed. Have you seen this: Washington D.C. — The Department of Energy has issued a draft solicitation that would provide up to $12.6 billion in loan guarantees for Advanced Nuclear Energy Projects. http://www.energy.gov/articles/department-energy-issues-draft-loan-guarantee-solicitation-advanced-nuclear-energy-projects Thanks. Seen it now. Like I said, smart money won't invest in nuclear without guarantees. Still, it's probably worth it for strategic reasons, as well as a fallback if solar doesn't keep up its gallop towards the price floor. Answering Vaclav Smil Watts Up, Vaclav? Putting Peak Oil and the Renewables Transition in Context by Chris Nelder June 5, 2013 I'd like to pick out some blockquotes, but I'm going to be away from my computer for most of the rest of the day. Have other things to finish. AK, you might have noticed that I am very sceptical about any attempts to predict the future. In this context, considering potential technological advances and pricing, I recall a 1985 assessment at Australia's Bureau of Industrial Economics. In 1975, BIE (or its parent department) forecast which ten industries would grow fastest in the next decade. In the event, none of the industries which grew fastest from 1975-85 (all in microelectronics) were on the list, as they did not exist in 1975. I say again, I say again, we must pursue policies which enhance our capacity to deal best with changing circumstances, whatever they may be, rather than putting unwarranted faith in projected and possible futures, particularly those predicated on technological change (or, of course, imperfect modelling of possible temperature rises). Peter Lang has asked me (below) to look at some work on discount rates, if I manage to comment, it might be relevant to this sub-thread. A higher natural gas price scenario is reasonable. 
The number of rigs drilling for gas is down because the price is too low. Companies drill for condensate, and light oil with associated gas, but they stay away from dry gas. Eventually the reduction in the number of wells being drilled is reflected in a lower gas production capacity. The lower capacity leads to price increases. It's a cyclic phenomenon. Eventually the industry shakes down, the weaker companies are bought at distress prices and the competition streamlines the number of players. New players appear to operate as cheaply as possible, and the cycle moves on. But the prices continue to rise. Also, the Obama administration is allowing some gas exports. This in turn should increase prices. I don't think the USA has enough gas to make an impact supplying the vehicle fleet. "Higher" is a relative term: http://chart.finance.yahoo.com/z?s=UNG&t=5y&q=l&l=off&z=l&a=v&p=s&lang=en-US&region=US What's relative is the sense of time spans. Drilling for gas is uneconomic at this time. One reason is the over-investment by mullets buying gas funds. Over the next 30 years the prices will rise. Natural gas won't last forever, that's for sure. But there's more there than anyone knows, I'm betting. Looks like industry agrees with you, Fernando. White said he supports the Keystone XL, but also noted that despite the political hurdles facing that project, the American pipeline sector overall is booming. "We've seen more pipeline construction in the U.S. over the last seven years than in the history of the history of the United States," White said. "I'm not saying this to minimize the significance of Keystone or anything," White said, "but by historical standards, we're moving pretty fast on midstream." Pickering said he doesn't expect the boom in production from shale to slow in the near future. "The next big thing is the current big thing," Pickering said. "We're 10 years into the shale story… and it's probably a 20-year or 30-year thing." http://fuelfix.com/blog/2014/09/23/panel-dont-expect-high-natural-gas-prices-any-time-soon/ Don't forget sea-floor methane hydrates. The robotics needed to operate on the ocean floor are subject to "Moore's Law". And if there aren't any people present, the very high pressures aren't really a problem. Fernando Leanme | October 5, 2014 at 3:08 am | I think many of you who dismiss the difficulties getting the unconventional hydrocarbons don't grasp the details. Working offshore in deep water isn't subject to Moore's Law. Go read about Petrobras and their project to produce the presalt fields. And tell me, do you think methane hydrates are found in a nice little pile on the sea floor? Do you visualize something we can suck up with a vacuum cleaner? Alexej Buergin | October 4, 2014 at 12:38 pm | "China's one-word answer to Obama" reminds me of this: A man orders "flied lice" from a Chinese waiter. The six-word answer: "It is fried rice, you plick." Funny. I actually got around to reading that article because of this joke. But more seriously, I suspect the key is: –Western countries also need to remove "obstacles such as IPRs [intellectual property rights]" to "promote, facilitate and finance the transfer" of "technologies and know-how" to developing countries in advance of any future climate deal; They'll drop all the other demands to get free access to the patented technology. Well, until the CAGW people can show significant warming in the raw data for this century, it is just a game and China wants to win as much as it can.
Without actual (raw data) warming, unless someone pays you to clean up, there is no incentive. The pause is predicted to go on until 2030 so we have some time to kill. If the pause goes to 2030 CAGW has a hard time justifying any action. China will be at peak coal and there won't be another big player to keep emissions rising. It is hard to defend predictions of more than a 1°C temperature rise if temperatures are flat for a third of a century. A 1°C temperature rise by 2100 doesn't justify taking any action. From The Carbon Brief article on the Antarctic: But at the North Pole the decline of Arctic sea-ice continues to accelerate. Scientists haven't yet been able to pin down why the opposite is happening in the Antarctic. Are these guys looking at the same charts as the rest of us? Danley Wolfe | October 4, 2014 at 12:45 pm | Pierrehumbert has been a lead author on the IPCC Assessment Reports and was a co-author of the National Research Council report on abrupt climate change. His field of specialization is developing idealized mathematical models to solve problems in climate science. He is also a frequent contributor to RealClimate. Pierrehumbert is the last person in the world to give, and one of the people with the most to lose from giving, a fair and balanced perspective on climate science. Any more questions? Diag | October 4, 2014 at 1:37 pm | Another consensus bites the dust: http://abcnews.go.com/Health/story?id=117310 "Was Ebola Behind the Black Death?" Controversial new research suggests that contrary to the history books, the "Black Death" that devastated medieval Europe was not the bubonic plague, but rather an Ebola-like virus. The details in the article are quite convincing. Maybe in a thousand years someone will write an article like this about climate science. "If you look at the way it spreads, it was spreading at a rate of around 30 miles in two to three days," says Duncan. "Bubonic plague moves at a pace of around 100 yards a year." Pneumonic plague… [I]s more virulent and rare than bubonic plague. The difference between the versions of plague is simply the location of the infection in the body; the bubonic plague is an infection of the lymphatic system, the pneumonic plague is an infection of the respiratory system, and the septicaemic plague is an infection in the blood stream. Typically, pneumonic form is due to a spread from infection of an initial bubonic form. Primary pneumonic plague results from inhalation of fine infective droplets and can be transmitted from human to human without involvement of fleas or animals. Untreated pneumonic plague has a very high fatality rate. The genome of Yersinia pestis, the bacterium that causes bubonic plague, recovered from human remains at East Smithfield. http://www.nature.com/news/2011/111025/full/478444a.html Yeah, well, I was referring to the "details in the article are quite convincing." Anybody with any knowledge of "Yersinia pestis" knows about the pneumonic version. Thus, the article was just the opposite of "quite convincing." Is my son who is training as a thoracic physician in a growth industry? Thanks for the info. After reading the Wikipedia and CDC pages the article doesn't look convincing at all. Don B | October 4, 2014 at 2:03 pm | California drought: "To test their theory, the Stanford team applied advanced statistical techniques to a large suite of climate model simulations."
In 1994, when the NY Times was not the climate campaigner it is today, it noted that droughts in the California region were much, much worse in the past. "BEGINNING about 1,100 years ago, what is now California baked in two droughts, the first lasting 220 years and the second 140 years. Each was much more intense than the mere six-year dry spells that afflict modern California from time to time, new studies of past climates show. The findings suggest, in fact, that relatively wet periods like the 20th century have been the exception rather than the rule in California for at least the last 3,500 years, and that mega-droughts are likely to recur." http://www.nytimes.com/1994/07/19/science/severe-ancient-droughts-a-warning-to-california.html Data trumps models. As I've said before, they don't build huge dams in areas where water is reliably plentiful. Sure they do, for hydro power and to a lesser extent flood control. They do not build them for water supply. Well, OK, but I don't believe that either of those were the main considerations for building dams in places like California. sunshinehours1 | October 4, 2014 at 2:32 pm | Almost as dry as 1923. http://sunshinehours.wordpress.com/2014/10/04/california-drought-almost-as-dry-as-1923-and-1976/ Explaining Extreme Events of 2013 from a Climate Perspective American Meteorological Society goes all-in for James Hansen's climate-change worldview! AMS Summary and Broader Context This report contributes to the growing body of evidence that human influences on climate have changed the risk of some extreme events and that scientists are increasingly able to detect these changes. … effects of human-induced climate change, as found for the Korean heat wave of summer 2013. These individual examples are consistent with the broader trends captured in the latest IPCC (Stocker et al. 2014) statement, "it is likely that the frequency of heat waves has increased in large parts of Europe, Asia and Australia." Beyond the science, there is an ongoing public dialog around climate change and its impacts. It is clear that extreme events capture the public's attention. And, indeed, they should because "people, plants and animals tend to be more impacted by changes in extremes compared to changes in average climate". The World Wonders What will Rome say? Because isn't it the poor and disenfranchised who suffer most, from heat and drought and rising seas? As for denialism's "usual suspects" Don't we *ALREADY* know what their frothing ideology-driven response will be? "The freedom-destroying commie/green/liberal climate-science conspiracy extends even farther than we ever dreamed!" cwon14 | October 4, 2014 at 3:04 pm | More bafoonery Fanboy, the "conspiracy" straw man has been shot down a thousand times before. Left-wing group think such as AGW is conspiracy idealization by definition; "big oil" etc. etc. It is you ranting conspiracy theory with the largest tinfoil hat in history. cwon14, your comment was inspected for scientific information, historical precedents, and rational discourse. Result: none were found. He didn't even spell "buffoon" right. Market fundamentalism! Is there any problem it can't solve? Canman | October 4, 2014 at 4:27 pm | FOMD: The link is to an Onion parody about a proposed program to build a National Air Conditioner. How does that have anything to do with market fundamentalism, much less markets? That was funny. Caustic but funny. Humans do cause heat waves in cities.
They buy air conditioners and pump very humid hot air out of their homes and apartments into the city causing the temperature to rise even higher and then they need more air conditioning. It has nothing to do with CO2. "Studying the impact of air conditioning on the heat island of Paris, another team led by Cecile de Munck and French colleagues observed that air conditioning increases energy demand and the cooling systems themselves release heat onto city streets. "It's a vicious circle," said de Munck, "temperature increase due to air condition will lead to an increasing air cooling demand."" Skiphil | October 4, 2014 at 3:21 pm | Fanboy, no one of the slightest scientific orientation cares about "What will Rome say?" Why do you insist upon spouting irrelevancies that only annoy people? If you ever get out of mama's basement you will be surprised at how interesting the Real World turns out to be. Skiphil asserts [utterly wrongly and without reason or evidence] "Fanboy, no one of the slightest scientific orientation cares about "What will Rome say?" Ignorance by skiphil, evidence by FOMD! Fan of more discourse, I was wondering if the paper on tornado frequency I coauthored with Dr Abruzzo was discussed at your committee meeting? The prepublication draft is at http://21stcenturysocialcritic.blogspot.com.es/2014/09/a-new-parameter-to-predict-tornado.html It fits the context of anti-climate-denial required by the committee. It predicts increases in tornado frequencies when global warming resumes. Consider that we are not even aware of all the natural phenomena that take place around us — as we look back in time at past weather to tease out future trends — all of which are involved in climate change that we only understand after the fact. vukcevic | October 4, 2014 at 3:03 pm | I have looked at ENSO from various angles, and find the classic view not entirely satisfactory. Mr. Bob Tisdale, an enthusiastic researcher of ENSO, suggests that the start of a Kelvin wave, i.e. downwelling at the eastern longitudes, is underway. I propose: Atmospheric pressure at Port Moresby (142 East) should be able to tell us something about the ENSO. Either the change in the atmospheric pressure is caused by downwelling, or the downwelling is initiated by the atmospheric pressure. The waveforms of the two are similar but not identical. Spectral composition is almost identical for periods up to 8 years or so; then the two diverge. What follows is plain and simple: Port Moresby atmospheric pressure has two dominant components: the sunspot cycle at 11 years and the lunisolar tidal period at 18.6 years. http://www.vukcevic.talktalk.net/ENSO-PMap.gif I suggest that the way the ENSO index is calculated inadvertently conceals the true cause of the ENSO. Of course, I do not expect many or even anyone to agree. http://science.nasa.gov/media/medialibrary/2006/05/10/10may_longrange_resources/predictions3_strip.jpg?w=600 I suggested a couple of months ago that using atmospheric dipoles may not be the best method to calculate two important climate indices, NAO and ENSO. http://judithcurry.com/2014/07/26/open-thread-19/#comment-611992 Global Warming (definition): Threatens even when it seems to yield.
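For readers who want to try the kind of spectral comparison vukcevic describes above, here is a minimal sketch. The file names and column layout are hypothetical (any monthly Port Moresby pressure series and any monthly ENSO index of the same length would do), and the 11-year and 18.6-year periods are only the claims being checked, not asserted results.

```python
# Sketch: compare the spectral content of a monthly pressure series with an ENSO index.
# Assumes two plain-text files with one monthly value per line (hypothetical filenames).
import numpy as np
from scipy.signal import periodogram

pressure = np.loadtxt("port_moresby_slp_monthly.txt")   # hypothetical station pressure series
enso = np.loadtxt("enso_index_monthly.txt")             # hypothetical ENSO index, same length

fs = 12.0  # samples per year (monthly data)

for name, series in [("Port Moresby SLP", pressure), ("ENSO index", enso)]:
    series = series - series.mean()            # remove the mean before the periodogram
    freqs, power = periodogram(series, fs=fs)
    mask = (freqs > 0) & (freqs < 0.5)          # keep periods longer than 2 years
    top = np.argsort(power[mask])[-3:][::-1]    # three strongest spectral peaks
    periods = 1.0 / freqs[mask][top]
    print(name, "dominant periods (years):", np.round(periods, 1))

# Peaks near 11 years (solar cycle) and 18.6 years (lunar nodal tide) would support the
# comment above; whether they actually appear depends entirely on the real data used.
```

Whether the two spectra really diverge beyond 8-year periods, as claimed, is exactly the sort of thing this quick check would show one way or the other.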
John Smith (it's my real name) | October 4, 2014 at 3:26 pm | NYT on Australian 2013 heat "record" high "when we look at the heat of 2013 across the whole of Australia … virtually impossible without climate change" ugh … no kidding I, for one, could become less skeptical if they would come up with better language (I know what they mean, it just sounds so dumb) "climate change" the perpetual motion machine of propaganda labeling eternal … never to be resolved record heat in Australia but new high in Antarctic sea ice? Australian records are relatively short and only those from Stevenson screen days are accepted; this is very roughly from 1900 or so, depending on the stations. There are interesting records from Watkins, who described many of the 'unprecedented' things happening today, including birds falling out of the sky due to the heat. Watkins predates Stevenson screens by well over a hundred years. A good flavour of the often savage climate of Australia can be seen in the poems of Dorothea Mackellar, particularly this one: http://www.dorotheamackellar.com.au/archive/mycountry.htm The first part of the poem is, I believe, a reference to her origins in the UK. Tony, that is probably the best known and most often cited poem in Australia. I loved the Australian landscape and light when I accidentally emigrated in 1979; a friend from Essex who'd emigrated three years earlier still found it alien and unsettling. Deservedly so, it's a very evocative poem which illustrates that the harshness of the climate is not restricted to this year. Are the Watkins diaries much publicised over there? Hopefully John will find the references interesting. the poem kinda made my day … wish I could buy you an espresso or a pint … espresso for me, I don't drink (former professional – had to retire) One thing that bugs me about AGW folk is the "global citizen" one world government stuff … as you know, the passage from feudalism to nation states was a bloody go … these "one world" folk are misguided and ignorant of history (and some I fear might be hiding their true motives) the poem just reminded me of love of country, something I have of late gained new appreciation for Yeah, dubious of the "record heat" stuff … you are a gentleman and a scholar PS … one thing I haven't yet been lucky enough to see is the Southern Cross Tony, I've never heard of the Watkins diaries, so either no or not to my demographic, which seems to prefer CE. I remember seeing the Watkins' Diaries at Readings' Book Shop a while back bts. mosomoso | October 4, 2014 at 8:55 pm | Dorothea's piece was read by every Aussie kid in the 50s. Many years later, living on the land and waiting for spring rains, I find myself aching for that "drumming of an army" after weeks of "pitiless blue sky". More essential reading on the nature of Oz is this little short story by Henry Lawson: http://www.readbookonline.net/readOnLine/12038/ tonyb | October 5, 2014 at 3:41 am | Beth, Faustino and Mosomoso Here are two short extracts: http://ebooks.adelaide.edu.au/t/tench/watkin/ Watkin Tench The Settlement at Port Jackson (Part of) Chapter 17 "The difference can be accounted for only by supposing that the woods stop the warm vapours of the sea from reaching Rose Hill, which is at the distance of sixteen miles inland; whereas Sydney is but four.* Again, the heats of summer are more violent at the former place than at the latter, and the variations incomparably quicker.
The thermometer has been known to alter at Rose Hill, in the course of nine hours, more than 50 degrees; standing a little before sunrise at 50 degrees, and between one and two at more than 100 degrees. To convey an idea of the climate in summer, I shall transcribe from my meteorological journal, accounts of two particular days which were the hottest we ever suffered under at Sydney." "But even this heat was judged to be far exceeded in the latter end of the following February, when the north-west wind again set in, and blew with great violence for three days. At Sydney, it fell short by one degree of what I have just recorded: but at Rose Hill, it was allowed, by every person, to surpass all that they had before felt, either there or in any other part of the world. Unluckily they had no thermometer to ascertain its precise height. It must, however, have been intense, from the effects it produced. An immense flight of bats driven before the wind, covered all the trees around the settlement, whence they every moment dropped dead or in a dying state, unable longer to endure the burning state of the atmosphere. Nor did the 'perroquettes', though tropical birds, bear it better. The ground was strewn with them in the same condition as the bats." beththeserf | October 5, 2014 at 4:02 am | Thx fer the Oz licherachure, Tony and Moso. I'll add the Watkins ter me reading list. The Lawson, lol, moso, so laconic. Rob Ellison | October 5, 2014 at 4:55 am | Contrast the early painter John Glover and the impressionist Arthur Streeton. It took a while before Europeans were capable of even seeing the landscape. Mackellar was a similar vintage to Streeton and was at the transition between harking back to England and an emergent nationalism. http://en.wikipedia.org/wiki/John_Glover_(artist)#mediaviewer/File:John_Glover_-_The_bath_of_Diana,_Van_Diemen%27s_Land_-_Google_Art_Project.jpg http://en.wikipedia.org/wiki/Arthur_Streeton#mediaviewer/File:Arthur_Streeton_-_Golden_summer,_Eaglemont_-_Google_Art_Project.jpg But of course the quintessential Australian poem of extremes is – http://en.wikipedia.org/wiki/Said_Hanrahan#The_Poem 'I'm longin' to let loose on somethin' new. Aw I'm a chump! i know it; but this blind ole springtime craze Fair outs me on these dilly silly days.' C J Dennis down under. http://www.middlemiss.org/lit/authors/denniscj/sbloke/spring.html Steyn is good! Here's his tweet to Greg Laden (in reference to Laden starting a silly pi$$ing contest with our hostess): "Maybe she's hiding the decline just to torment you?" State of Energy: Enough Gas for 100 Years There is enough energy in the ground right now to supply the needs of the U.S. for the next 100 years, and we can get to it economically. http://news92fm.com/483818/state-of-energy-enough-gas-for-100-years-hounews I have been looking at Exxon Mobil's performance, and noticed they are purchasing large amounts of their own shares. Their oil and gas production trends down, but looks fine on a per-share basis. My guess is their production will rebound over the next few years but the medium term looks grim. EM looks a bit more difficult to study than, say, Chevron Texaco. CT has been losing oil production and they don't look likely to make a turnaround in this trend over the next 5 years. CT seems to be turning into a little-oil, big-gas company. have you noticed much movement in gas to liquids? Not having much in the way of diesel must be holding the US back.
http://breakingenergy.com/2013/10/09/gas-to-liquids-primus-pursues-cheaper-drop-in-fuels/ http://www.foxbusiness.com/industries/2014/07/10/shell-leaves-its-peers-behind-on-big-gas-to-liquids-plants-7200438/ The Israelis will be completely energy independent in five years. Doc Martyn, I worked on gas to liquids in the 1990s. It requires a cheap gas price to be feasible. That pathway meets a barrier when gas can be marketed as LNG to the Far East and Europe. Thus what we see are increasing LNG trade flows and very little GTL. Also, do this exercise: find the gas reserves for a large player (use Russia), convert them to liquids using 30 % of the gas to fuel the process. Then compare those reserves to the Saudi Oil reserves. What you'll find is that gas isn't that plentiful if it's used to replace oil. Question: do you guys want me to show you the figures? Of course, show us the figures. I have never understood why there is no coal + natural gas => liquids route. 2 CH4 + C -> CH3CH2CH3 CH3CH2CH3 + CH4 + C -> CH3(CH2)3CH3 Probably pretty wasteful (of energy) without targeted catalysis. Of course, with the right enzymes, it might be feasible to drive it with partial pressure. Fernando Leanme | October 5, 2014 at 11:08 am | Doc, the gas to liquids processes tend to focus on FT technology. When I looked into gas conversion I became convinced it was much more practical to convert the methane to dimethyl ether (DME) and try to use it as a diesel substitute. A DME conversion unit makes methanol as a side stream, and it's a lot cheaper. I also tried to sell the use of ethane as feedstock, but I could never get anybody interested because there wasn't that much ethane in the world. If the USA ethane surplus were to last it would make an excellent candidate to be made into DME (ethane can be fed into a much cheaper reformer). But I don't think that would allow the development of an infrastructure. We seem to run into the same issues all the time. DME is horrid; it has highly flammable creeping vapours and reacts with water and oxygen to make shock detonation peroxides. Fernando. I second what Doc said about DME. DME is a gas, not a liquid. Maybe you meant diETHYL ether. Even that is highly volatile and would not, even physical property-wise, work as a diesel substitute. It is the culprit in many a lab explosion and fire. It produces peroxides spontaneously within its bulk if not inhibited. Needless to say, this is not a good thing. Fernando Leanme. | October 5, 2014 at 3:32 pm | Doc. Russia has the world's largest gas reserves. If those reserves are ALL converted to syncrude they would be equivalent to 192 billion barrels. Now take the USA gas reserves, set them at 25 % of Russia's… that's 48 billion barrels. Assume USA liquids consumption rate is reduced to 12 million barrels per day, that's 4.38 billion barrels per year. That's 11 years' worth of supply. So even in the USA with the infrastructure and relatively low prices it seems like a poor bet at this time. Many years ago I lived in Russia and was trying to investigate how to move those giant gas reserves to liquids. When I put the pencil to it I learned that gas was useful as a supplemental source of liquids. But it wasn't a game changer. Eventually we realized the Europeans paid a very nice price for the gas, so we gave up the Russian GTL idea. Did you notice the way Qatar markets their gas? LNG. I hear their GTL project with Shell isn't working out very well. Maybe some day we will have a better technology?
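A minimal sketch of the back-of-envelope arithmetic in the GTL comment above, using only the figures the commenter states (192 billion barrels of syncrude-equivalent Russian reserves after 30 % process fuel, US reserves assumed at 25 % of Russia's, and US liquids consumption of 12 million barrels per day). These are his assumptions, not independent estimates; the sketch just reproduces the division.

```python
# Back-of-envelope check of the GTL reserve argument, using the commenter's own figures.
russia_gtl_equiv_bbl = 192e9          # stated: Russian gas reserves as syncrude, after process fuel
usa_share_of_russia = 0.25            # stated assumption: US gas reserves ~25% of Russia's
usa_gtl_equiv_bbl = russia_gtl_equiv_bbl * usa_share_of_russia

usa_liquids_bpd = 12e6                # stated assumption: reduced US liquids consumption, bbl/day
usa_liquids_per_year = usa_liquids_bpd * 365   # ~4.38e9 bbl/year

years_of_supply = usa_gtl_equiv_bbl / usa_liquids_per_year
print(f"US GTL-equivalent reserves: {usa_gtl_equiv_bbl / 1e9:.0f} billion bbl")
print(f"Annual liquids consumption: {usa_liquids_per_year / 1e9:.2f} billion bbl")
print(f"Years of supply if ALL gas went to GTL: {years_of_supply:.0f}")   # ~11 years
```

The result (about 11 years) matches the figure quoted in the comment; changing any of the assumed inputs changes the answer proportionally.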
I predict a barrel of laughs from this one. Scientists are to challenge the climate-change sceptics by vastly improving the speed with which they can prove links between a heatwave or other extreme weather event and man-made changes to the atmosphere. It typically takes about a year to determine whether human-induced global warming played a role in a drought, storm, torrential downpour or heatwave – and how big a role it played. This allows climate sceptics to dismiss any given extreme event as part of the "natural weather variation" in the immediate aftermath, while campaigners automatically blame it on global warming. By the time the truth comes out most people have lost interest in the event, the Oxford University scientists involved in the project say. They are developing a new scientific model that will shrink to as little as three days the time it takes to establish or rule out a link to climate change, in large part by using highly accurate estimates of sea surface temperatures rather than waiting for the actual readings to be published – a process that can often take months. http://www.independent.co.uk/environment/climate-change/scientists-to-fasttrack-evidence-linking-global-warming-to-wild-weather-9773767.html Yet people will believe this Cullen nonsense. Presumably they are defining normal using available data, then measuring the statistical distance to the extreme as a probability or some such. Reminds me of the garbage of determining the 100-year flood using 100 years of data. Being able to crank out BS faster isn't necessarily a benefit. Having them cry wolf on a daily basis doesn't change the fact they are crying wolf. DocMartyn finds his happy-place "There is enough gas-energy in the ground right now to supply the needs of the U.S. for the next 100 years" Market fundamentalists appreciate what this means, DocMartyn: NO SIZE RESTRICTIONS AND SK*RW THE LIMITS! http://totallytop10.com/wp-content/uploads/2010/08/zombieland_photo_08-535×3551-300×198.jpg http://gilkalai.files.wordpress.com/2012/11/johns.jpg John Vonderlin | October 4, 2014 at 5:28 pm | DocMartyn, At first I thought you had grabbed one of my Morning After photos off of Facebook and then I realized my hairline recession hasn't reached that point yet. At least now I can understand the spread of coulrophobia in modern society. A Wikipedia entry has this about the etymology of this mental illness's popular name: The prefix coulro- may be a neologism derived from the Ancient Greek word κωλοβαθριστής (kōlobathristēs) meaning "stilt-walker."[nb 1] Although the concept of a clown as a figure of fun was unknown in classical Greek culture,[4] stiltwalking was practiced. The Online Etymology Dictionary states that the term "looks suspiciously like the sort of thing idle pseudo-intellectuals invent on the Internet and which every smarty-pants takes up thereafter". Frowny Face Frowny face Frowny face Our name is legion! That today's Conservatives can no longer support the rights their predecessors saw as conservative safeguards is a mark of their extremism. If Cameron wins the next election, that extremism will drive a majority Tory government. His supporters will not allow him to play the PR man once again and dress up our existing rights in new clothes. They will force him to abolish or restrict them. No doubt they will scream with pain when the state threatens them.
But to repeat the old gag that a conservative is just a liberal who hasn't been arrested and predict that they will change their minds is to miss the source of rightwing anger. In a celebrated speech in 2009, the late and much missed Lord Bingham listed the liberties the European convention protects. The right not to be tortured or enslaved. The right to liberty and security of the person. The right to marry. The right to a fair trial. Freedom of thought, conscience and religion. Freedom of expression. Freedom of assembly and association. "Which of these rights, I ask, would we wish to discard? Are any of them trivial, superfluous, unnecessary? Are any of them un-British?" http://www.theguardian.com/commentisfree/2014/oct/04/tory-wreckers-out-destroy-human-rights The EU's famous respect for human rights and freedom of expression is very morally flexible. I'm not impressed by any political side of the coin. The left has a very well established reputation for being incredibly repressive when they take power. The right is pretty much the same. I'd say the best bet is to be libertarian if you worry about human rights. Rob Ellison | October 4, 2014 at 5:15 pm | I prefer the term classic liberalism – and in Australia freedoms – long fought for – are best defended in the common law inherited from England. There was an exceptionally interesting programme on the BBC tonight about Angkor Wat and how they managed their water to support the largest city in the world in the 13th century http://www.youngzine.org/article/lost-city-khmer-empire The city was eventually overwhelmed by climate change as a severe decades-long drought was followed by an equally long period of exceptional rain which caused the water management systems to collapse. Rob Ellison Australia is a different realm. Australians' position on the planet's underside leads to a troublesome posture, which makes that branch of humanity have loose connections with the average political structure. You have been reading the map upside down. https://watertechbyrie.files.wordpress.com/2014/06/article2-world.gif go help laden. he's embarrassing Biodiversity is declining in both temperate and tropical regions, but the decline is greater in the tropics. The tropical LPI shows a 56 per cent reduction in 3,811 populations of 1,638 species from 1970 to 2010. The 6,569 populations of 1,606 species in the temperate LPI declined by 36 per cent over the same period. Latin America shows the most dramatic decline – a fall of 83 per cent. Habitat loss and degradation, and exploitation through hunting and fishing, are the primary causes of decline. Climate change is the next most common primary threat, and is likely to put more pressure on populations in the future. Terrestrial species declined by 39 per cent between 1970 and 2010, a trend that shows no sign of slowing down. The loss of habitat to make way for human land use – particularly for agriculture, urban development and energy production – continues to be a major threat, compounded by hunting. The LPI for freshwater species shows an average decline of 76 per cent. The main threats to freshwater species are habitat loss and fragmentation, pollution and invasive species. Changes to water levels and freshwater system connectivity – for example through irrigation and hydropower dams – have a major impact on freshwater habitats. Marine species declined 39 per cent between 1970 and 2010.
The period from 1970 through to the mid-1980s experienced the steepest decline, after which there was some stability, before another recent period of decline. The steepest declines can be seen in the tropics and the Southern Ocean – species in decline include marine turtles, many sharks, and large migratory seabirds like the wandering albatross. http://wwf.panda.org/about_our_earth/all_publications/living_planet_report/living_planet_index2/ The contribution from anthropogenic climate change is certainly overestimated. http://wwf.panda.org/_core/general.cfc?method=getOriginalImage&uImgID=%26%2AR%5C%2C%20%3E_41 I would blame fishing fleets. And natives wearing shotguns. curryja | October 4, 2014 at 5:33 pm | Now this is interesting. I have been wondering how to reach the younger generation with regards to the climate debate. Let's face it: the prime demographic at Climate Etc. is over 55, white and male. Of my 3000 twitter followers, there is a growing number of young people. This exchange with steyn and laden has been picked up by Twitchy, which has 167,000 followers (primarily a young demographic). There has been huge discussion of the laden episode on twitter as a result, all of which has gone against laden. This tweet sums up the sentiment: Up jumped the climate change true believer >>> @gregladen <<< & member of the wuss generation Clearly they don't like being told what to think. I think this needs more investigation; it is pretty interesting. Wonderful. Children are less tolerant of jerks than adults are. http://twitchy.com/2014/10/04/another-witch-agw-alarmist-exposes-twitchy-as-anti-science-bot-of-some-sort/ laden is such a conspiracy nut This too is fun: http://twitchy.com/2014/09/17/hashtag-backfire-mock-a-lanche-triggered-after-global-warming-alarmist-suing-mark-steyn-solicits-questions/. In case all us white males over the age of 55 also have the attention span of gnats, perhaps you had better ensure your articles here have no more than 140 characters; then they could also be posted directly on twitter. A tweet a day on climate science, describing all the essentials, would be amusing. Judith Curry wonders "how to reach the younger generation" Recommended: Jane Goodall's Roots and Shoots programs for children. Goodall's programs are immensely well-respected and popular with parents and teachers around the globe. Older male feces-flingers especially take note! Typically vile nonsense. Is lulz a sentiment? Interesting site, the Twitchy: http://twitchy.com/2013/06/27/boom-laura-ingraham-destroys-hero-wendy-davis-with-one-question/ search sarah palin http://twitchy.com/2014/09/21/morons-see-katie-pavlich-destroy-climatemarch-hypocrites-with-their-own-photos/ Giggling madly. Curious George | October 4, 2014 at 10:38 pm | What does it have to do with anything? My daughter is 18 and son 16. They are skeptical about everything the ladder pullers tell them. How to reach younger people: http://www.morris.umn.edu/newsevents/view.php?itemID=13074 I will give Nye this. With such events, he is on target. My son has his ticket. stevefitzpatrick | October 4, 2014 at 10:02 pm | I resent…. err… resemble that! The boring truth is that only very green/liberal or dedicated conservative/libertarian (aka white, over 55, male) people care much about this issue. If warming does not accelerate very soon, then vehement defenses of high climate sensitivity will be relegated to the theater of the absurd, where mainstream climate science already has long-term residence status. I would develop a video game.
The more intellectually inclined seem to like building simulations. But the game has to end with the good guys shooting at an enemy using a projectile weapon. They also seem to like comedy. Somehow one of my posts was linked in an Occupy webpage, but I think that was done by one of my children as a joke. Dick Hertz | October 5, 2014 at 11:20 am | Hey Judy, If you really want to understand this phenomenon, you should get a deep foundation in South Park. The show has a fiercely independent political point of view and it has had a huge influence on the political point of view of the college-age demographic. You can watch full episodes online. They are often very crude, but thoughtful. If you choose to watch episodes of South Park from an academic point of view, it's important to understand that the shows are often conceived, written and produced in a week's time, so they are often immediately topical to the politics of the day. I would recommend you binge-watch the entire 20 seasons, but if you just want to focus on global warming or climate change, check out the episodes "Smug Alert" and "Two Days Before the Day After Tomorrow" "Manbearpig" is another special episode. An homage to Al Gore and his book, "A Convenient Lie". jim2 | October 5, 2014 at 11:56 am | Currently "Smug Alert" is available only for purchase. You can try Hulu for free for a limited time. http://www.hulu.com/south-park It's also available on Amazon. http://www.amazon.com/gp/product/B000LVKGCU/ref=atv_feed_catalog?ref_=imdbref_tt_pv_vi_aiv_1&tag=imdbtag_tt_pv_vi_aiv-20 curryja | October 5, 2014 at 12:06 pm | Thx for this suggestion, I've heard of South Park but never watched it. Tonyb | October 5, 2014 at 12:23 pm | I think Family Guy is much funnier than South Park although it seems to have something of a sympathy for global warming. Quite how you can utilise either show to tap into the college-age demographic is another thing though Dick Hertz | October 5, 2014 at 3:24 pm | Jim2 is correct, a few months ago South Park episodes were all available for free (with commercials). It looks like they have recently made a deal with Hulu, so many episodes are only available through Hulu. And PA is correct that Manbearpig is a great episode delving into Al Gore and his personality issues. Here is an interesting interview with Trey Parker and Matt Stone at some skeptics conference, introduced by Penn Jillette. They don't mention climate change at all, but it gives a view of what they are all about. They talk about their ridiculing of TV psychic John Edwards and making fun of a variety of religious ideas (including atheism). I would think that some of their commentary on global warming would fall into the category of religion. FYI, Trey Parker and Matt Stone are the South Park creators. Penn Jillette (Penn and Teller) and his Showtime show called Bullshit are another example of the kind of skeptical ideals that young people latch onto; not as big as South Park, but interesting if you want to understand what young people gravitate to. It's about the presentation, the humor, the language, and a certain respect for the audience. aaron | October 6, 2014 at 9:14 am | Last week's gluten episode was great. And, of course, the original Christmas e-card that launched the whole thing: http://youtu.be/mJbbtEOE4a4 As I am disposed to lecture on and on, my younger crowd says: "Oh DAD." Honing skills to express thoughts as sound-bites does seem to help somewhat.
However, when the subject of Climate Change even approaches from a distant theme, the gathering scatters and I am left mumbling to the dog. I am convinced that it is not what I say, it's just that I am saying it. I, the adult, am the problem in communicating with younger people. The saying: "don't trust anyone over thirty" resonates today as it did yesteryear. In coaxing my youngest grandchild to play, I speak his language and engage one-on-one. I keep in mind what he is interested in and I add some aspect of teaching/learning to the scenario: "what makes the cars and trucks and things that go, go?" Speak to the budding football player about the weather, wet grass, dressing warmly and… and: "where does the weather come from?" Engage the child, adolescent, young adult where they are and then add. Add just a little bit, in context. Dr Curry, I discussed this with a few teenagers. They suggest you reach their teachers. This applies to high school age. The age group between 30 and 50 is mostly focused on reproduction and cash generation. kim | October 5, 2014 at 4:04 pm | invisibleserfscollar.com The Early Bird has found the worm, even way down South in Georgia. Dan Pangburn | October 4, 2014 at 5:47 pm | The time that passes between when a CO2 molecule absorbs a photon and when it emits one is about 10 microseconds. The average time, at sea level conditions, between molecule impacts (which, among other things, conduct energy away from a molecule) is about 0.1 nanoseconds. Thus photon energy absorbed by CO2 molecules near the surface (i.e. tens of meters) is essentially all thermalized, i.e. conducted to non-CO2 molecules that outnumber CO2 molecules 2500 to 1. Thermalized energy carries no identity of the molecule that absorbed it. Discussed further at http://agwunveiled.blogspot.com PA | October 5, 2014 at 9:18 am | http://wattsupwiththat.com/2010/08/05/co2-heats-the-atmosphere-a-counter-view/ Maybe. Or maybe not. Doesn't look like there is going to be a lot of transference. However, a well-designed experiment (instead of guessing by physicists) should settle the issue. If the energy from CO2 isn't transferred but reradiated, it acts like a time delay, which would still produce some warming. Has anybody done a well-designed experiment to measure the degree of energy transfer from CO2 to N2 and O2? bob droege | October 5, 2014 at 2:40 pm | A fraction of the CO2 molecules are in the vibrational excited states even without being excited by infrared, so CO2 takes energy from the other gases and radiates in all directions, some of which makes it to the surface, warming the earth. The more CO2, the more warming xanonymousblog | October 4, 2014 at 5:55 pm | Here is my response to Raymond T. Pierrehumbert's desperation: 1) "but your usual stable of tame skeptics is starting to die off" this is a terminal move for raypierre, and is no more than an appeal to authority cloaked in argumentum ad hominem. Incorporating two fallacies in one is impressive, but shows that the interest in objective science is zero. 2) "committee did not include a single physicist who was actually doing work in the area of climate science." This reads as though, if one is not a "climate scientist" (whatever that is anyway), one's opinion is of no consequence. Yet raypierre later describes the unsettled nature of the problem, which is only exclusive to his domain? Raypierre is an expert in the history of climate science; unfortunately it seems he has glossed over the history of science itself.
3) raypierre notes that the rate of sea level rise is around 3 mm per year. It's a pity someone doesn't infer climate sensitivity from such a robust indicator of change. 4) raypierre cites the APS meeting where Collins noted ""It is virtually certain that internal variability alone," because just heating the ocean alone will not produce this dipole, "cannot account for the observed warming since 1951." Such pseudo-scientific statements represent the level of ignorance which has come to dominate this obscure field of study. How on Earth would a warming ocean NOT produce massive amounts of water vapor which would warm the troposphere in exactly the same way the models produce the hotspot? But the fun doesn't stop there; more bizarre claims are made by Santer when he compares what he calls "natural internal variability" with the hotspot. In actual fact it is simply his idea of internal variability, which in reality is actually trendless white noise. Stop the presses! White noise has no trend! Such dishonesty in science is rare. These guys make it into an art form: https://xanonymousblog.files.wordpress.com/2014/05/whatafool.jpg Next up is Held, whose beautiful description of negative feedback is ruined by his stubborn refusal to actually allow it to work: https://xanonymousblog.files.wordpress.com/2014/05/morefools.jpg In summary, it's unlikely raypierre's essay will elicit any sort of professional response. As Dick Lindzen has noted, if there is little substance to the claims, then don't bring attention to them…. Spence_UK | October 4, 2014 at 6:59 pm | Wow, they still think that GCM control runs are a good basis for assessing internal natural variability? Astonishing! GCMs demonstrably do not capture the autocorrelation properties of the climate system and therefore are completely inappropriate for use as a null hypothesis. This type of incestuous testing has no merit or value whatsoever beyond self-delusion. Oddly – the science is settled sufficiently. The obvious approach is to get some environmental goals on the board – restore soils and ecosystems – and to foster energy innovation. http://judithcurry.com/2014/10/03/challenging-the-2-degree-target/#comment-635010 Best analysis of the global warming community ever: "…there is a massive plot by huge multinational environmental corporations, academics and hippies to deprive you of the right to drive the kids to school in a humvee…" ianl8888 | October 4, 2014 at 7:55 pm | Butterworth is being sarcastic here, don't you know? "Although climate scientists update, appropriately, their models after ten years of evidence, climate-science communicators haven't," said Dan Kahan, a professor of law and psychology at Yale who studies how people respond to information challenging their beliefs. Luckily, social and political psychologists are on the case." Stupendously conceited mullahs of scientism like Dan Kahan from stupendously conceited institutions sell soft waffle as scholarship. It's what they do for a living and we must be used to that by now. They're in the pay of Big Smug. But the NYMAG article is starting to sound like a lead-up to re-education camps. Of course, they'll try convincing conservatives by appealing to their self-interest first. Perhaps we can become shallow and brittle replicas of our New Class betters. But if that doesn't work… JustinWonder | October 4, 2014 at 7:50 pm | Mosomoso – "They are in the pay of Big Smug." Big Smug? That is pretty funny! Did you invent that?
Regarding sticking to old beliefs, especially involving numbers, that is called "anchoring". The persistence of anchors, once introduced, is amazing. Big Al got way out in front with his sci-fi movie. Those old CAGW ideas are dug in deeper than an Alabama tick. We already have re-education camps, aka sensitivity training. South Park – "Smug Alert" Ah, yeeeeze… born only to pursue knowledge. Kahan spends his time studying why people disagree with Kahan and how they can best be cured of this infirmity with firmness but compassion. I get how juveniles can be entranced by his self-satisfaction and twaddle, but this guy and his ilk get taken seriously by people of adult age. Maybe not adults, but certainly of adult age. johanna | October 5, 2014 at 12:06 am | "Big Smug" – good one, Mosomoso. :) A while ago I wandered over to Kahan's site and read a few of his articles and the subsequent comments. What a load of twaddle! He is your classic academic obfuscationist, never uses a short word when a long one (or phrase) will do; regards disagreement as a failure of communication instead of a genuine disparity of viewpoints; and has brought the art of condescension to a whole new level. He's a fairly successful con artist who is not nearly as clever as he thinks he is. Hilary Ostrov shredded him a few times while simultaneously trimming her cat's toenails and cooking dinner. When somebody in the backseat chants "we're going to drive off a cliff just ahead" for 17 miles (or chants global warming for 17 years) … you stop, chuck them onto the shoulder, and continue on. What if we'll never know? stevefitzpatrick | October 4, 2014 at 8:13 pm | I think you are mistaken. I followed the link to the Guardian blog; same venomous accusations, especially on the left. As a climate scientist, you are, at least tangentially, part of a political cabal that is focused only on implementing a specific set of green policy goals. The disturbing part of the story is that even among obviously qualified climate scientists, the scientific analysis is subservient to the political calculus (think Lindzen versus Santer). The field is not well. IMO, only political intervention will fix 'the science' in the short term. Whether that intervention is politically possible will depend in large part on the November elections for the US Senate. In the long term, nature will laugh at those who insist on high sensitivity; sometime thereafter, so will everyone else. The only important question is how much economic damage will be done before everyone is laughing at high sensitivity. This is how a hoax dies. Very, very slowly, unfortunately, and more dangerous in its death throes. Fixing the World, Bang-for-the-Buck Edition: How can we do the greatest good? See: Bjorn Lomborg's New Freakonomics Radio Podcast "Here's $2.5 trillion. You have 15 years to spend it. How do you distribute this money in a way that will achieve the most good for the world?" I just did a podcast with Freakonomics on my think tank's "Post-2015" project on the Sustainable Development Goals. It is the podcast for the #1 selling Freakonomics book, a #1-ranked podcast, with more than 5 million monthly downloads. One of the comments the listeners left reads: "FASCINATING Freakonomics podcast this week. With so much bad news lately, it was heartening to hear that people are spending so much time and effort to fix things, to fix things right, and to make the most impact possible."
I hope you enjoy it too http://freakonomics.com/2014/10/02/fixing-the-world-bang-for-the-buck-edition-a-new-freakonomics-radio-podcast/ There is no end to the good ideas for spending someone else's money. Wagathon: There is no end to the good ideas for spending someone else's money. There are at least as many bad ideas. The point of the exercise is to weight the alternatives, and their likely outcomes, and their likely costs and benefits. California, for example, has rushed into massive spending projects that are not likely to produce benefits exceeding their costs in the next 40 years, while neglecting its water control infrastructure. There ought to have been more debates about the costs and benefits of these alternatives. In California the entire government-education complex is a bridge to nowhere. JustinWonder | October 6, 2014 at 2:18 am | And it ain't cheaper neither. But, someday we may have a high-speed (high on speed?) train to nowhere for that bridge to nowhere. So we got that going for us … …ain't cheap…I meant to say. Skiphil | October 4, 2014 at 10:51 pm | Ben Santer shows up to make a critical comment at WUWT (it does appear to really be him, judging from the detail in the comment and the specific anecdote related): Ben Santer comment at WUWT The last thing anyone should ever do is misrepresent the air of certainty and urgency that all on the Left have tried for years to cultivate by knowingly providing false, incomplete, misleading and sometimes simply made-up facts and information to create public alarm. ah, the large number of volcanic eruptions over the last 17 years and a dim sun are to blame. How low can you go, Ben? omanuel | October 4, 2014 at 11:17 pm | Professor Curry, as Czech President Klaus concluded several years ago, "It's about freedom." http://junkscience.com/2014/10/04/roy-spencer-on-how-people-seem-to-not-care-about-the-warming-crisis/comment-page-1/#comment-274546 Interesting article on Judith Curry: SCIENCE: A woman in the eye of the political storm over climate change A fan of *MORE* discourse | October 5, 2014 at 12:01 am | Let's pull the pieces together! A woman in the eye of the political storm over climate change "Curry and the other scientists agree on the basics of the science. They are quibbling over the uncertainties." Belief, bias and Bayes Cohort I: If you have a prior assumption that modern life is rubbish and technology is intrinsically evil, then you will place a high prior probability on carbon dioxide emissions dooming us all. Cohort II: If your prior bias is toward the idea that there is a massive plot by huge multinational environmental corporations, academics and hippies to deprive you of the right to drive the kids to school in a humvee, you will place a much lower weight on mounting evidence of anthropogenic climate change. Cohort III: If your prior was roughly neutral, you will by now be pretty convinced that we have a problem with global warming. FOMD allies with Cohort III … along with an overwhelming majority of the world's STEAM professionals *and* common-sense citizens. Pretty much *EVERYONE* appreciates these POLITICAL realities, eh Climate Etc readers? Meanwhile The seas keep rising, the oceans keep heating, and the polar ice keeps melting … all without "pause" … all without obvious limit … all showing us a Hansen-style reality of sustained energy imbalance. Pretty much *EVERYONE* appreciates these SCIENTIFIC realities, eh Climate Etc readers? Skiphil | October 5, 2014 at 12:27 am | FOMD, nice but your "Cohort III" is mis-described.
Plenty of us had priors (me for instance) that ranged between neutral and "there probably is a problem since there is so much noise about a problem" — yet do not end up in your camp that there is evidently any (serious) problem that can and must be dealt with by concerted state actions. In other words, your set of cohorts forms a "straw man" type of analysis…. but the comment is entertaining, I'll give you that. Well, the cohort three (III in Roman numerals is three) definition was written by someone who is somewhat biased. Around 2000 when I started looking at this – it looked like AGW had some game. After 14 years: 1. The AGW crowd has to puff up surface temperatures, 2. The AGW crowd has to puff up sea level (adding GIA to the sea level rise is outright lying). 3. The Arctic sea ice volume is expanding. The 2014 minimum is over twice the 2012 minimum volume. 4. The Antarctic sea ice extent is at record levels. 5. The amount of East Antarctic land ice (which is mostly landlocked and unmoving) is increasing. East Antarctic land ice is basically permanently removed from the climate system. At some point the East Antarctic accumulation (90% of land ice) will offset the West Antarctic and Greenland melting. The Greenland core is also landlocked and getting thicker. 6. The GCM models have been consistently wrong for the entire 21st century. If we see some real raw data warming by 2030 maybe they have some game, but right now CAGW (catastrophic global warming) appears to be dying as a theory. AGW (a little man-made warming) really doesn't justify taking any actions. We will move away from fossil fuels gradually and that is just fine. Let nature (or economic forces) take their course. Demonizing CO2 seems to be some sort of religion. It's an opiate to ease the pain of absolutism. PA reminisces "Around 2000 when I started looking at this – it looked like AGW had some game. After 14 years … … record global heat and accelerating ice-mass loss (affirmed by multiple independent studies and measures), all precisely as climate-scientists predicted. Your insights are appreciated PA. Hypothesis Perhaps the global commie/green/liberal conspiracy is more vastly corrupt than even "Cohort II"s market-fundamentalist stalwarts foresaw? The world wonders! Lithium isn't just for batteries, don't ya know. omanuel | October 5, 2014 at 12:44 am | Thanks, Skiphil, for the link. In my opinion, Professor Curry is the hero in the Climategate story. http://www.eenews.net/stories/1060006489 Rob Ellison | October 5, 2014 at 12:31 am | Natural, large-scale climate patterns like the PDO and El Niño-La Niña are superimposed on global warming caused by increasing concentrations of greenhouse gases and landscape changes like deforestation. According to Josh Willis, JPL oceanographer and climate scientist, "These natural climate phenomena can sometimes hide global warming caused by human activities. Or they can have the opposite effect of accentuating it." http://earthobservatory.nasa.gov/IOTD/view.php?id=8703 It is difficult to imagine that climate is at all predictable against a backdrop of vigorous natural variability – and it is not as if the rate of warming is all that striking. I am inclined to take the high point of the early century warming – 1944 – as a starting point and the late century high point – 1998 – as the finish. This accounts for both a multi-decadal cooling and warming period. Surely – there is an obvious rationale there.
We may even assume that all of the warming between 1944 and 1998 was anthropogenic – unlikely as that is – to give a warming rate of 0.07 C/decade. Well short of 2 degrees C anytime soon – especially as the oceans are contributing to surface cooling for decades seemingly. http://www.cru.uea.ac.uk/cru/data/temperature/HadCRUT4.png I am inclined to just move on entirely from the rhetoric of catastrophe. There are plenty of things to be getting on with. Trade, development, progress and ecological and soil conservation all bring environmental benefits – but are clearly not the prime objective. Targeting greenhouse gases would send entirely the wrong message. We might also for economic reasons encourage energy innovation. We might then see more progress on social and economic development and some on biodiversity. xanonymousblog | October 5, 2014 at 12:40 am | https://xanonymousblog.files.wordpress.com/2014/05/life-cycle.jpg 'have kids with environmental green lawyer/vegan.' Surely not! Are we not an abomination to an already overpopulated Gaia? '… Overpopulated by humans, that is. xanonymousblog | October 5, 2014 at 2:18 am | damn it ……forgot climate tourism……. mosomoso | October 5, 2014 at 2:32 am | And don't forget "green jobs"; an invisible phenomenon which occurs in the wake of industrial self-harm. Faustino | October 5, 2014 at 5:26 am | moso, the green jobs are invisible only because CO2 has stimulated background vegetation growth. Alexander Biggs | October 5, 2014 at 1:20 am | Antarctic sea ice increasing. There are more people and more industry in the N hemisphere, hence more waste heat. But total global heat output is roughly in equilibrium with total heat from the sun, so as the N hemisphere gets warmer, the Southern gets colder. David Wojick | October 5, 2014 at 9:03 am | I do not see why waste heat in the NH would cool the SH. By what mechanism might this happen? Through the Lens (@woodyjohn1) | October 5, 2014 at 5:35 am | Global warming? I'm still waiting for the Millennium Bug. Tuppence | October 5, 2014 at 5:49 am | That Carbon Brief link…. Seems CB is dedicated to blinkered alarmism – any comment not in that vein is just deleted. thisisnotgoodtogo | October 5, 2014 at 7:14 am | The nonsense article "Psychologists Are Learning How to Convince Conservatives to Take Climate Change Seriously", by Jesse Singal, says "…in practical, apolitical contexts, many conservatives already recognize and are willing to respond to the realities of climate change. "There's a climate change people reject," Kahan explained. "That's the one they use if they have to be a member of one or another of those groups. But there's the climate change information they accept that's just of a piece with all the information and science that gets used in their lives." A farmer approached by a local USDA official with whom he's worked before, for example, isn't going to start complaining about hockey-stick graphs or biased scientists when that official tells him what he needs to do to account for climate-change-induced shifts to local weather patterns." However, in an article called "Why Don't Farmers Believe in Climate Change? And does it really matter whether they do?" by David Biello, we see an entirely different story to Kahan's rubbish: "Take, as an example of skepticism, Iowa corn farmer Dave Miller, whose day job is as an economist for the Iowa Farm Bureau.
As Miller is happy to explain, it's not that farmers in Iowa don't think climate change is happening; it's that they think it's always been happening and therefore is unlikely to have much to do with whatever us humans get up to down at ground level. Or, as the National Farm Bureau's spokesman Mace Thornton puts it: "We're not convinced that the climate change we're seeing is anthropogenic in origin. We don't think the science is there to show that in a convincing way." (Given the basic physics of CO2 capturing heat that have been known for more than a century and the ever-larger amounts of CO2 put into the atmosphere by human activity, it's not clear what "science" he's holding out for.) The numbers back that up: When Iowa State University sociologists polled nearly 5,000 Corn Belt farmers on climate change, 66 percent believed climate change is occurring, but only 41 percent believed humans bore any part of the blame for global warming. It's not just the Corn Belt: Farmers across the country remain skeptical about climate change. When asked about it, they tell me about Mount Pinatubo and weird weather in the 1980s, when many of today's most established farmers were getting their starts. But mostly I hear about cycles in the weather, like the El Niño–La Niña cycle that drives big changes in North American weather. Maybe it's because farmers are uniquely exposed to bad weather, whether too hot or too cold. Almost any type of weather hurts some crop; the cereals want more rain, but the sweet potatoes like it hot and dry. Year-to-year variability in the weather dwarfs any impact from a long-term shift in the climate. Consider this: A farmer in Iowa might deal with a 10-degree-Fahrenheit shift in average temperatures from year to year, so why worry about a 3- or even 4-degree shift over 100 years? As the old saying goes: If you don't like the weather, wait five minutes and it will change. The long-term prediction for the Corn Belt in Iowa says that the weather will get hotter and drier—much like western Kansas is currently. Yet, over the decades of Miller's farming career, conditions have been increasingly wet. "If I had done what climate alarmists had said to do, I would have done exactly the wrong thing for 20 of the last 25 years," Miller says." Dan Hughes | October 5, 2014 at 7:50 am | An interesting comment over at . . . and Then There's Physics. In reply to this comment by Rob Painting, Steve Bloom says: Tipping points, Anders, not in the models. Rob, that model finding for the Amazon was recently overturned by observations, [edh bold] Explains a lot. All calculated model results are apparently considered to be true findings prior to Validation. While at the same time completely skipping over all aspects of Verification. Love the wording, tho, overturned by observations, I've got to work that into a sentence sometime. A long version of, wrong. Indeed, modeling results are merely conjectures but they are treated as established facts, or even as observations. This is how modeling has come to dominate the science. Stephen Segrest | October 5, 2014 at 9:34 am | Tony B — You previously asked for some links of incendiary statements being made by Republicans (i.e., Conservatives) on AGW. 
A while back, I created a jpg picture of things being said to a general public audience ("Fraud, Hoax, Scam, Junk Science, God tells us it can't be true, etc."): http://www.treepower.org/blog/teapartyscience1.png In the "context" of the general public debate, how should advocates of AGW respond to these incendiary statements? It is in this context of responding to a message of "Junk Science, Hoax, etc." that the phrase "scientific consensus" has mostly been used. Anybody following my comments here at CE knows that I don't think much of Wagathon (and many others like him). They are not true Conservatives, but radical Ideologues — wanting to re-fight the American Civil War. For true Conservatives, the problem isn't junk science from liberals (per Wagathon), it's the "Junk Thinking" by people like Wagathon. From the Get-Go, GW was hijacked by liberal ideology in command/control policies like Carbon Taxes or Carbon Trading. Conservative Leadership has never developed a clear and consistent pro-active message and pushed hard with conservative, principled policies to approach AGW. Two examples of "No Regrets" approaches reflecting Conservative principles could be (1) Fast Mitigation of reducing methane, black soot, smog, etc. and (2) using international trade for high economic growth to encourage low carbon economies (e.g., exporting U.S. technology to developing countries in exchange for them having greater access into our markets). Hank Zentgraf | October 5, 2014 at 9:47 am | Stephen, advocates of AGW might consider responding to "incendiary statements" by opponents the same way skeptics respond to AGW advocates who say the science is "settled". Try respectful reasoning with evidence that follows the rules of science and statistics. The defining element of the global warming hoax has always been the pretense that enlightened governments acting through the auspices of the UN, under the moral authority of Western science, can and should throttle back modernity to prevent future climate change. Who shall decide our individual fate – government scientists on cell phones in ivory towers, sporting laptops and clouds full of phony data and pushing an evergreen buttload of public-funded research proposals about saving the Earth from human depredation? climatereason | October 5, 2014 at 10:43 am | I don't think I agree with any of the statements around your central circle other than that I would observe that alarmists can push their beliefs with messianic fervour, so in that respect it is akin to religion. Peter Lilley, one of the few sceptics in the UK parliament, wrote this in the Huffington Post. http://www.huffingtonpost.co.uk/peter-lilley/global-warming-religion_b_3463878.html Basically, the belief seems to be stronger than facts. Generally, our politicians don't have the fervour some of yours do. I mostly agree with your 1). As regards your 2) regarding international trade, most developing countries want to export agricultural products and the US has been at the forefront of refusing deals that might impact on your farmers, so I am not sure how that would work out. In any case high-tech products tend to need high-tech people to supply, install and run them, and those are often in short supply locally. Having said that, improving a peasant's standard of living generally reduces their need to have a large family and improves their overall health and wealth and perceptions.
So I am all in favour of improving their lot, however I think that it will be a long time before fossil fuels can be genuinely replaced in developing countries.
Stephen Segrest | October 5, 2014 at 10:31 am | Wagathon is advocating playing a very dangerous "High Stakes", "Winner Take All" game. If Conservatives don't develop a pro-active leadership position on AGW with conservative principled policies, the outcome will be catastrophic if public opinion turns, demanding action. Without pro-active conservative leadership (with definitive policies), the only things out there are the liberal policy approaches (e.g., carbon taxes). Conservative principles: Bottom-up, De-centralized, Flexible, Reward Based.
Given that CAGW is an intrinsically liberal policy there is no way for conservatives to be "pro-active"; rather they are anti-active. This is why the policy is stalemated for now. Just to refine the point, Stephen, your argument seems to be that this is going to happen whether conservatives like it or not, so they should work to get it done their way instead of fighting it. Your premise is false.
mosomoso | October 5, 2014 at 11:30 am | Yes, David, I'm afraid these people will have to establish their case. I don't want to be pro-active or on-board with agendas others have cooked up. I want to wholeheartedly oppose agendas with which I disagree. "No regrets" is just a lure. I don't like soot, smog or poverty. I do like kittens and warm mugs of cocoa. That's got nothing to do with believing in hockeysticks or in ruinous toy technologies offered as "solutions" to confected problems. Before another hurricane whips across Leyte I'll happily see western aid money go to pinning more roofs on Leyte. I don't want to respond by sending Australian money into a carbon scam run by Goldman Sachs or the European Union in the hope of dialling up a stable climate which has never existed. To be a true conservative is to be a serial appreciator: to appreciate when the flick of a switch makes light, without smoke, flame or noise, to appreciate wealth, industry, chemicals and coal as much as the natural world (eg this Australian bush I live in and love). So, enough fetishism, waste and white elephants, okay?
Well, let's apply the 3.5°C IPCC average for 2100 to how much the temperature should have increased by 2014: 14/100 * 3.5 = 0.49°C. The danger level is 2°C: 14/100 * 2 = 0.28°C. The level by 2020 for 2°C is 0.4°C. We aren't going to get close to any of these numbers (the pro-rating arithmetic is sketched out below, after this exchange). The AGW crowd needs to explain why we should be solving a non-problem. Discussing solutions of a non-problem is premature.
Your argument is a non-starter from the get-go in defining conservatives as Republicans and using the moniker "Tea Party" more as a denigrating ad hominem than a convenient and informative generalization. The Tea Party isn't even a party. To put more denotative authority into your language, it would be more accurate to equate the Tea Party with neither the Democrat nor the Republican party. But even when you get past the shallow thinking, your argument is more proof of than a challenge to the fact that society is fundamentally dishonest. Your point 1), for example, is the first step. SS, the problem the AGW crowd has is that they aren't rational. Rational people prove there is a problem before they start devising solutions. The AGW crowd regards proving there is a problem as a "pro-forma" step that can be skipped. This allows them to dive right into solutions. It is not a pro-forma step.
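For readers who want to check the pro-rating arithmetic in the "14/100 * 3.5" comment above, here is a minimal sketch. It only reproduces that commenter's linear ramp from 2000 to 2100; the 3.5°C and 2°C inputs and the helper name prorated_warming are illustrative assumptions, not values or methods drawn from an IPCC report.

```python
# Minimal sketch of the linear pro-rating used in the comment above.
# Assumes warming accrues linearly from 2000 to 2100 (the commenter's
# simplification, not an IPCC projection method).

def prorated_warming(rise_by_2100_c: float, year: int, start_year: int = 2000) -> float:
    """Linearly pro-rate a year-2100 warming figure to an intermediate year."""
    return (year - start_year) / (2100 - start_year) * rise_by_2100_c

for label, total in [("3.5 C 'IPCC average'", 3.5), ("2 C 'danger level'", 2.0)]:
    print(f"{label}: about {prorated_warming(total, 2014):.2f} C expected by 2014")
# Prints roughly 0.49 C and 0.28 C, matching the figures quoted in the comment.
```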
AGW advocates must prove there is a problem, and they haven't made their case. Before we destroy our economy and waste our money the AGW crowd has to prove the CO2 forcing is as high as they predict. Right now the evidence isn't on their side. True, true, to get beyond the obvious fact that global warming is nothing more than a Left versus right issue and therefore has nothing to do with science, you must explain how self-described "conservatives" can be comfortable with the fact that belief in AGW theory represents a celebration in the abandonment of the scientific method as if it has no societal implications at all beyond the global warming debate. … mitigating black soot would be for the Left to agree that natural gas is a boon to all humanity. This week saw the 18th anniversary since the Earth's temperature last rose – something that Dr Benny Peiser, from the Global Warming Policy Forum, says experts are struggling to understand. He explains that we are now in the midst of a "crisis of credibility" because the global warming – and accompanied 'Doomsday' effects – that we were once warned about has not happened. Scientists from the Intergovernmental Panel on Climate Change (IPCC) once predicted a temperature rise of 0.2 degrees per decade – but are now baffled by the fact our planet's temperature has not increased for almost two decades. Speaking exclusively to Express.co.uk, Dr Peiser said: "What has happened is that the public has become more sceptical because they were told we are facing Doomsday, and suddenly they realise 'Where is the warming that we were promised?'" "They say we can predict the climate and the reality is that they can't." Because of this so-called "global warming hiatus", Dr Peiser says climate change is not as pressing of an issue as it once was, a fact that should be embraced by the scientific community. http://www.express.co.uk/news/nature/518497/Exclusive-interview-with-Dr-Benny-Peiser Howard | October 5, 2014 at 12:33 pm | At Revkins Dot Earth, Victor replies to Romulus and Remus regarding his Nature essay on the 2-deg limit. This is an excellent example of adult behavior. Victor versus R&R The oening says it all. The rest are all the bloody details. An eyeopening read. Along with the Great Ramanathan, it appears that UCSD is the center of the Sane Climate Policy World. First, before digging into the substance, I think it is deeply disturbing that both these posts use the same tactics that are often decried of the far right—of slamming people personally with codewords like "political scientist" and "retired astrophysicist" to dismiss us as irrelevant to the commentary on a matter that is for climatologists. Other scientists, by contrast, are "internationally renowned" (quote by Joe) implying somehow that we are peripheral thinkers. People can say what they like about us, but we have not been irresponsible unqualified hacks weighing in on this matter. Getting serious about goals requires working across disciplines—especially between the natural sciences and the social sciences, which are about human behavior (which is what, ultimately, these goals are trying to change). The failure to do that effectively is one of the reasons why climate science hasn't been more directly linked to policy. Willard | October 5, 2014 at 12:43 pm | I doubt that Victor's distinction between "before substance" and substance cuts any ice, Howard, but that's a prety damn good example of how unnecessary roughness can turn against a ClimateBall ™ player. 
INTEGRITY ™ – We Code Words Parseled out by the sou. GaryM | October 5, 2014 at 1:34 pm | One of the central political questions of our time (posed over 150 years ago), by way of NRO: "In his magnum opus, The Law, Frédéric Bastiat wonders about the nature of the bureaucrats responsible for onerous regulation: 'If the natural tendencies of mankind are so bad that it is not safe to permit people to be free, how is it that the tendencies of these organizers are always good? Do the legislators and their appointed agents not also belong to the human race? Or do they believe that they themselves are made of a finer clay than the rest of mankind?'" http://www.nationalreview.com/article/389528/color-money-josh-gelernter Vanity, the belief in one's own superiority, is the core of progressivism, + 1,000! Gary, as one who has worked for UK, Australian and Queensland governments, I am well aware of the failings of bureaucrats (and politicians), my experience over 50 years is the main reason I argue for smaller government. I have met some ministers and public servants who have a genuine interest in community well-being, and the nous to pursue sensible policies. But they are in the minority, and rarely prevail. The majority are self-serving jobsworths. What amazes me is how few people are prepared to accept this, they cling to faith in governments against all the evidence. I think that almost all (at least) indicators of well-being would improve with smaller government, which would help to foster self-reliance, initiative and entrepreneurial skills. Too often these critical skills are crushed or discouraged by government, only winding back government can change that. I am the president and only member of the 'plague on all your parties,party' Increasingly we are lumbered with self serving career Politicians who often go straight from university to their party and have little experience of the real world. Many of them have little common sense or are particularly brigh which all becomes a potent brew when combined with their ideology Thank you Faustino Some months ago I argued this very line, but you then disagreed Please don't tell me you were being disingenuous earlier on – it would destroy my naive faith in the goodness of the Public Service BTW, the dream of smaller government is as a sword to Don Quixote Yes, less is more, Faustino, less guvuhmint that is. Innovation and productivity while the leaders are otherwise engaged, like in China pre the Ming and the beginnings of commerce with Constantinople by those wily Venetians out in the marshes long ago. No Venice, perhaps no growth of Italian cities or the Renaissance. Ian, you must have misunderstood me, I have been consistent in this for many years. I recall I did undertake to respond to you on another issue, sorry I haven't, I will if I can. Yes, Tony, I posted at NRO and added: " Increasingly, apparatchiks dominate." It seems to have got worse in recent decades in Oz, the UK and the US; no need to mention the leviathan EUreaucracy. Peter Lang | October 5, 2014 at 7:25 pm | Garym and Faustino +1000 each ianl8888, Faustino has been presenting this point consistently since I first noticed his comments. I suspect you may have misunderstood what he meant in the previous comment you referred to. The problem in the US is there is no "small government" party. Both the main political parties are basically big government parties, the difference is which hogs feed at the trough. 
The more the Tea Party pushes at least the Republicans toward small government the better. I don't see any tendency or influence that will push the Democrats toward sanity. Actually, there is: the Libertarian Party. I haven't been associate with the Libertarian Party since the '70's, although I consider myself a libertarian. The relationship between Libertarians and the Tea Party is complex. But, it has become, harder and harder to misinterpret the signs – as Al Gore would say – that these climate priests high up in Western academia's ivory towers are really bad at math. The amount of government involvement and money that is going into to underwriting the added expense of alternative energy is ruinous much like chewing off your arm to get more protein in your diet. Mark Steyn in 'The Spectator' likened the mindset of global warming alarmists to being in first-class staterooms aboard the Titanic and rooting for the iceberg. That is why "Manbearpig" (South Park) was so funny. They used climate activist math. ALGORE's introductory statement on the topic: "It is a creature which roams the earth alone. It is half man, half bear, and half pig. Some people say that Manbearpig isn't real. Well, I'm here to tell you know, Manbearpig is very real, and he most certainly exists. I'm serial." Half man, half bear, half pig? Oh really? Yes, if you have the right ALGOREITHM anything is possible. omanuel | October 5, 2014 at 5:58 pm | Definitive Data Settles the Debate: http://stevengoddard.wordpress.com/2014/10/05/the-definitive-data-on-the-global-warmingclimate-change-scam/ Physicist with 50 years experience | October 5, 2014 at 6:07 pm | People should heed the work of the brilliant 19th century physicist who was first to determine the size of air molecules. Josef Loschmidt was also first to explain (indirectly) through his gravito-thermal effect the answer James Hansen et al sought as to why planetary surfaces are hotter than radiating temperatures. We don't need Hansen's hypothesis about back radiation and the consequent (but necessary) garbage about the Second Law applying to a combination of independent processes. What is in this comment has profound consequences. Think on it! You're arguing for the possibility of a perpetual motion machine? In conclusion, below-average nuclear power outages throughout the shoulder season will likely contribute to lower natural gas demand despite record high production. While their overall contribution to natural gas demand and direct impact on price is relatively small, it comes at an inopportune time for the bulls and is one less source of demand that the natural gas longs can count on to correct the present supply/demand mismatch. http://seekingalpha.com/article/2540405-nuclear-reactor-outages-and-their-impact-on-natural-gas-price-another-strike-against-the-bullish-case Stephen Segrest | October 5, 2014 at 7:28 pm | Wagathon, Jim2, David Wojick, PM, and Others: Since much of the CE blog community is not from the U.S. — spend a little time with non U.S. viewers on (1) the U.S. Supreme Courts decision to uphold the Environmental Protection Agency's Authority to regulate greenhouse gas emissions; (2) How our Political System works (e.g., especially how our Senate works on majorities between 60% and also 66% to overturn a Presidential veto). Talk about how the EPA is taking a Regional approach to greenhouse gas emissions — and tell us why you believe that things like regional cap and trade (financial derivatives) can not possibly come out of EPA Regs. 
Given the Supreme Court's willingness to "overlook" the Constitution, just about anything can happen. That doesn't mean it should happen or that it's right.  The EPA concedes that following the proposed rules would have no more than a negligible effect at most on climate change and the amount of atmospheric CO2. But, the compliance costs and disruption to the economy could be huge. Will global warming continue to be a plank in the Democrat platform when it's obvious AGW isn't about CO2 and the climate but really about the Left's belief capitalism is a disease? ianl8888 | October 6, 2014 at 2:34 am | That's why power corrupts, Jim For a longer or shorter period, it (the exercise of power) has no accountability. Voting a change every 3-4 years cannot undo the careless damage done before There is no sensible answer for any of this, it's the human condition Wagathon — Your argument is exactly why Conservatives need to develop pro-active policies on AGW — and not simply have a strategy of, or be viewed as obstructionists. Without doing this there is a great big void, of no conservative alternatives — only liberal options (e.g., regional cap & trade financial derivatives). There is nothing wrong is saying "We believe AGW is occurring — but our scientists are telling us they are unsure of how much or how quickly". Fast Mitigation policies are an example where we can "add time on the clock" before decisions on things like carbon taxes, cap & trade have to be made. Dr. Ramathan says maybe 20 years can be gained to let our scientists and engineers "catch up" with this Wicked Problem. The fishing/forest/agriculture sector, just the producer prices for products sold,, at GDP, not GWP, is about $ 3 trillion dollars a year. The.retail price level, using GWP, capturing the fact that for most of world which is at a subsistence level personal consumption (instead of producer sales) is very high, the number goes north of $ 9 Trillion. The 50% increase in plant growth since 1900 due to CO2 represents 1-3 trillion dollar benefit. More CO2 (up to about 1000 PPM) will increase this benefit. The AGW proponents should be asked point-blank why they are trying to starve people and reduce the standard of living. They should be forced to demonstrate sufficient harm to offset the benefits of increased CO2. This analysis doesn't include the benefits to wildlife that a richer more vibrant CO2 enhanced environment provides. More plants means more food for all the animals on the planet. PA — As the EPA develops rules on greenhouse gas emissions (which the Supreme Court said they have the authority to do in 2007) — how effective do you think your arguments will be as they do this? Reality-challenged people are not open to rational arguments. Anyone who claims CO2 is pollution has a seriously delusional viewpoint. All that can be hoped for is that congress can restrain the administration for 2 years. The next administration will hopefully downsize the EPA and some other government agencies and remove the bureaucratic drones that are causing problems. PA — "HOPE" is not a gameplan. The EPA has no evidence of net harm, It has 17 years of evidence the problem is greatly exaggerated. Don't have a good game plan for dealing with agenda-driven people who lack common sense and good judgment. The only solution is to fire them and that isn't likely in the current environment. All we can do in the meantime is slow them down and endlessly point out the absurdity of the baseless damaging polices that are being proposed. 
Stephen might be right. Perhaps people should be suing the EPA over its attempts to regulate CO2 as a pollutant.
PA, Jim2, Aaron — The EPA certainly isn't delusional in regulating CO2 and neither was the U.S. Supreme Court in upholding this authority in 2007 (before Obama was even elected). The Clean Air Act defines pollutant agents (in terms only a lawyer would love) and their impact on either weather or climate. http://www.law.cornell.edu/uscode/text/42/7602 Nothing wrong with your disagreeing with the interpretation on CO2 regulation — but the view of AGW advocates and SCOTUS that Congress intended it to be regulated is not simply delusional left wing liberalism, as you argue.
Peter Lang | October 6, 2014 at 8:33 am | Do you have any good, authoritative estimates of the compliance cost of carbon pricing, and of the GHG emissions monitoring that would ultimately be required for commerce in the commodity CO2-eq? I've been playing around with this a bit, but have no reliable estimates to work with.
Peter — While I'm no Guru on EPA Regs (especially since they've not even been written yet) — the EPA website can give you a general feel for their thinking, where policies must at least break even as to cost/benefit: http://www.epa.gov/climatechange/EPAactivities/economics/scc.html
Just an estimate of the probable job losses as a result of proposed EPA regulations.
Stephen Segrest, thank you for the EPA link. However, it does not deal with the compliance cost. It covers the benefits and abatement costs assuming compliance costs are nil, as do the widely used IAMs. I suspect the compliance cost of emissions monitoring would become huge as smaller and smaller emitters are included. EPA once estimated its own costs (not the cost to businesses or to all the other public and private sector organisations that are involved in compliance and who analyse and use the data) at $21 billion per year (budget increase) to monitor 6.1 million emitters. That is how many emitters would be included if the Obama legislation was applied. EPA tailored it down so they now monitor 8000 emitters. 8000 emitters covers just 49% of USA emissions. I don't know what proportion of emissions would be covered by 6.1 million emitters (all emitters of more than 250 tonnes p.a.). It seems to me that even at 1/10 of the EPA's estimate, the compliance cost of emissions monitoring would greatly exceed the revenue from a carbon tax – and the social cost of carbon. And we haven't included the effect it would have on global economic growth. Inevitably, there will be disputes over who's cheating and who's not measuring sufficiently accurately and precisely and who's not properly including the cost in the price of traded goods and services. Inevitably that will lead to trade disputes (e.g. the EU's attempt to make everyone flying into the EU pay for the EU's carbon price). Will Putin pay? There will always be a Putin somewhere. Has anyone seen reliable estimates of the compliance cost of GHG emissions monitoring as the world approaches monitoring 100% of GHG emissions?
Hi Peter — Here in the U.S. we have a requirement that at a minimum, cost/benefit must be a break-even. So, you would at least have a boundary. Also — I thought I saw where the EPA said that while they had not evaluated the cost of a carbon tax, they had evaluated cost under a cap & trade system. I thought I saw a number of $20 per ton? What Liberals never really address: 1. Any way you package a carbon tax, it would be a regressive tax — hurting the poor.
2. What about the competitiveness impact on domestic manufacturing? Would a carbon tax result in increased imports — where CO2 emissions are just being outsourced to a developing economy (like China)? 3. A Cap & Trade System is just a financial derivative and would be a new play toy of Wall St. We saw what financial derivatives did in bringing down World economies.
You are talking about the abatement cost and benefits of carbon pricing. You are not talking about the compliance cost of measuring, reporting, enforcing compliance, disputation, etc. These should be added to the Abatement Cost, but they are not included in the abatement cost. I have these questions:
• What is the compliance cost of GHG emissions monitoring?
• What would the compliance cost become in the future as participation increases (as Part 1 explained, near full participation is essential for carbon pricing to succeed and be sustainable; full participation means all human caused sources of GHG emissions in all countries are measured and priced)?
• What would be the ultimate compliance cost of near full participation?
• What would be the real cost to society of emissions monitoring?
• Is there an alternative policy approach that would not require emissions monitoring to the standards needed for commerce and trade?
EPA's estimate of an additional $21 billion per year to monitor 6.1 million emitters doesn't include businesses' compliance cost to monitor and report their GHG emissions. Nor does it include the cost to all the other public and private sector organisations that would be required to monitor and report compliance and have various roles in policing, accounting, routine legal services, dispute resolution, litigation, court time, penalty enforcement, etc. Nor does it cover all the organisations that would be involved in analysing the data, reporting to clients and stakeholders, maintaining data bases, and updating legacy systems forever. Nor does it consider the costs involved in international disputation and conflict resulting from countries not complying with the global rules – e.g. there will inevitably be other world leaders like Putin, sometime, somewhere; what's the total cost to everyone else when a Putin refuses to participate? The estimates do not include the cost of trade disputes, trade barriers, and reduced global economic growth caused by trade disruption and barriers to free trade.
JamesG | October 6, 2014 at 8:43 am | So Pierrehumbert asserted that Koonin had only read skeptic blogs when Koonin in fact reached his conclusions by interrogating a wide range of climate scientists. This is either serious disinformation or lazy bloviating by Pierrehumbert. His blinkered attitude speaks volumes about the merit of his opinion. The biggest skeptic, alas for him, is mother nature!
Wagathon — As Mosher repeatedly says, it's always more productive to be "sitting at the table". EPA greenhouse gas regulations are going to be developed with or without Conservative participation.
Wagathon — Last year, in testimony directly to the U.S. Congress, EPA Administrators from every Republican President said that AGW is a serious problem and Congress should take action (Ruckelshaus under Nixon, Thomas under Reagan, Reilly under George H.W. Bush, and Whitman under George W. Bush). This just doesn't "fit" under your ubiquitous liberal conspiracy theory arguments.
These EPA Administrators certainly don't stand alone — similar views are voiced by many Conservatives such as "The American Conservative Magazine", Michael Gerson (Washington Post), David Brooks (N.Y. Times), etc.
It is not a conspiracy any more than ISIS is a conspiracy. The Left hates everything America stands for. What is going on in today's classrooms goes beyond propaganda; and it's more than condoning fraud and verbal assaults: it's harassment, intimidation and indoctrination. Climatism represents a danger to the safety of all in society who refuse to adopt the anti-capitalism and anti-Americanism embodied in the global warming credo of those on the Left who have genuine socio-political and ideological interests at stake in the acceptance of global warming that have nothing to do with an average climate of the globe. Your table has a couple of screws loose. We are looking at an example of cause and effect. We don't need the scientific method to know that fear of global warming is a Left vs. right issue. Do we, however, just write off the relationship of party affiliation and political ideology as simply enigmatic, and fail to see the real causes underlying attacks on America from jihadists to climatists? A copy of a Sept. 9, 2014 letter from 15 Republican governors concerning the EPA, HERE, provides, e.g., as follows: "Given your Administration's opposition to make use of the Yucca Mountain repository, will you bring forward a viable, long-term solution for [nuclear waste] disposal that would win public support and the necessary votes in Congress? … If not, does your Administration expect the states with bans on new nuclear facilities [California, Connecticut, Illinois, Kentucky, Maine, New Jersey, Oregon, West Virginia, and Wisconsin] to revise their laws, despite the federal government's failure to adequately address the waste issue?" (See, ibid.)
Boy — you Ideologues have a short and selective memory. Why wasn't Yucca Mtn. resolved when Republicans were in power (Federal executive and legislative)? Also, the first nuclear plant being built in decades is coming under Obama's watch (Georgia Power's Vogtle units) — which is being done under your so-called liberal/socialist agenda of the DOE loan guarantee program (which has funded solar, nuclear, and automotive in equal parts). And anybody can file lawsuits — what is the track record in overturning the EPA in Court challenges? You must be using blue-state logic.
Has the federal government conducted an analysis to determine the environmental impact of building renewable energy systems at the scale envisioned in the proposal? For example, one nuclear plant producing 1,800 MWs of electricity occupies about 1,100 acres, while wind turbines producing the same amount of electricity would require hundreds of thousands of acres. If such an analysis exists, please provide detailed information related to that analysis. If such an analysis does not exist, please explain why the analysis was not performed. (Ibid.)
Calling the EPA administrators conservative is disingenuous. At least 3 if not all 4 are from the RINO wing of the Republican party, which isn't very conservative. They picked administrators that penned a joint opinion piece. Robert Fri, Russell E. Train, Steve Jellinek, Walter Barber, Jr., Anne M. Gorsuch, Marianne Lamont Horinko, Michael Leavitt, and Stephen L. Johnson didn't testify. So… about 33% of Republican EPA administrators, and none of the last 3.
PA — OK, I'll bite on this.
Please provide us with links where each of the Republican EPA Administrators you cited has come out in criticism of regulating greenhouse gases. Seems like Congressional Republicans would have had them testify to counter Ruckelshaus, Reilly, Thomas, and Whitman. From what you presented, one cannot conclude anything about their beliefs — silence doesn't mean agree or disagree.
http://leavittcenter.org/2013/10/22/global-warming-fact-fiction-or-both/ I would say Michael O. Leavitt is skeptical. http://conservefewell.org/author/marianne-horinko/ Marianne Lamont Horinko is pro-fracking and doesn't mention AGW at all (she IS interested in REAL pollution). Stephen L. Johnson got flamed for denying California stricter emissions standards. He seems to be keeping a low profile. From what I can tell it was just RINOs.
Wagathon — Here's your problem (and others') in being an Ideologue — refusing to even acknowledge any possible validity in another point of view. Let's take policy completely out of the dialogue and ask you a question: Do you think Nobel Prize winner Dr. Molina is just NUTS and IRRATIONAL in his views? http://theenergycollective.com/davidhone/60610/back-basics-climate-science Sure sounds like you (and others) do. All your rantings are not about pro/con dialogue about a specific aspect of science — everything you post is about labeling anyone who disagrees with you (like Dr. Molina) as a crazy liberal socialist.
A subset has frightened and foraged themselves into a quandary. So fearful of imaginary consequences, they've panicked themselves into an impossible corner, global and severe autocracy. A madness, I'm sorry.
Kim — It's a madness from the Ideologues on both sides. As to who is worse — I'd have to give the edge to the CAGW crowd (personally Oppenheimer drives me crazy). But, on the flip side, politicians like Inhofe are pretty close.
kim | October 6, 2014 at 12:08 pm | Bread turned into roses in his basket.
The problem Dr. Molina's viewpoint has is that only 1/3 of the energy leaves the surface via radiation. About 1/6 of the energy is removed by convection and about 1/2 (more at the equator) is removed by evaporation. You can dance about and scream radiation laws all you want – but that isn't the primary way energy leaves the surface. In fact any increase in "back radiation", whatever that is, will face negative feedback because convection and evaporation will increase. At the equator tiny changes make a big difference – a 5°C increase from a 30°C base temperature increases evaporation about 35%. Let's assume the ocean at the equator is evaporating 90 W/m2 of energy and 20 W of CO2 forcing is applied, giving us 5°C higher temperatures (in theory). 90 * 0.35 = 31.5 W/m2 of additional evaporative energy loss. I'm not including the increase in radiation or the increase in convection. About 1/4 of the surface energy from the temperature increase will leak out as radiation, since about 1/4 of radiation leaks out directly anyway, or about 5 W. We'll make a crude estimate of 2.5 W/m2 of increased convective heat loss as a placeholder. So if we subtract 31.5 + 5 + 2.5 from the 20 W, the 20 W of CO2 forcing leaves the surface about 19 W/m2 cooler than it started. Since this is nonsense, and the feedback is proportional more or less to the temperature increase, it means there is about 50% negative feedback and the surface will warm about 2.5°C.
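A minimal sketch that simply reproduces the back-of-envelope surface energy bookkeeping in the comment above. Every input (the 35% evaporation increase, the 90 W/m2 baseline, the 5 W radiative and 2.5 W convective terms, and the 20 W forcing) is that commenter's assumption, not a measured or modelled value, and the sketch does not validate the physics one way or the other.

```python
# Reproduces the back-of-envelope surface energy budget in the comment above.
# Every input is the commenter's own assumption; this is bookkeeping, not a physics check.

co2_forcing = 20.0             # W/m2, forcing assumed to give ~5 C of surface warming "in theory"
baseline_evaporation = 90.0    # W/m2, assumed equatorial ocean evaporative flux
evap_increase_fraction = 0.35  # assumed ~35% more evaporation for 5 C on a 30 C base

extra_evaporation = baseline_evaporation * evap_increase_fraction  # 31.5 W/m2
extra_radiation = 5.0          # W/m2, the comment's rough estimate of the direct radiative leak
extra_convection = 2.5         # W/m2, the comment's placeholder for extra convective loss

extra_losses = extra_evaporation + extra_radiation + extra_convection  # 39.0 W/m2
net = co2_forcing - extra_losses                                       # about -19 W/m2

print(f"Extra surface losses: {extra_losses:.1f} W/m2")
print(f"Net of the 20 W/m2 forcing: {net:.1f} W/m2")
# The comment reads this overshoot as implying roughly 50% negative feedback,
# i.e. about 2.5 C of warming instead of the assumed 5 C.
```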
Well — Dr. Molina (a Nobel Prize winning scientist) would disagree with you. But again, it's fine to disagree — what's not fine is to label scientists who agree with Dr. Molina as being driven by a left wing socialist agenda. That's the problem.
I didn't know if it was Dr. Maria C. Molina, Mario J. Molina, or one of a couple of dentists… I'm going with Dr. Mario J. Molina; he's a chemist. Seems to be an Ozone guy. Seems to have gotten a lot of mileage from the environmental movement for pushing the CFC ban. If that is the guy, he seems kind of like Hansen, who is pretty sincere. Sincerity doesn't always mean you're right though.
'A vigorous spectrum of interdecadal internal variability presents numerous challenges to our current understanding of the climate. First, it suggests that climate models in general still have difficulty reproducing the magnitude and spatiotemporal patterns of internal variability necessary to capture the observed character of the 20th century climate trajectory. Presumably, this is due primarily to deficiencies in ocean dynamics. Moving toward higher resolution, eddy resolving oceanic models should help reduce this deficiency. Second, theoretical arguments suggest that a more variable climate is a more sensitive climate to imposed forcings (13). Viewed in this light, the lack of modeled compared to observed interdecadal variability (Fig. 2B) may indicate that current models underestimate climate sensitivity. Finally, the presence of vigorous climate variability presents significant challenges to near-term climate prediction (25, 26), leaving open the possibility of steady or even declining global mean surface temperatures over the next several decades that could present a significant empirical obstacle to the implementation of policies directed at reducing greenhouse gas emissions (27). However, global warming could likewise suddenly and without any ostensive cause accelerate due to internal variability. To paraphrase C. S. Lewis, the climate system appears wild, and may continue to hold many surprises if pressed.' http://www.pnas.org/content/106/38/16120.full
S-B can't be used to calculate the surface temperature of the Earth – but merely an effective radiating temperature. This is compared to surface temp measurements to give a mooted 'greenhouse effect'. The essential problem arises from the failure to move beyond 100 year old physics to the true complexity of climate, in ways and for reasons that undermine real prospects for mitigation and adaptation. This arises from cognitive dissonance – a psychopathology linked to groupthink – which in this case almost universally derives from a progressive mindset.
Meanwhile, the Left is fighting what it feels is wrong with wrong-headed socio-economic policies, based on climate science that is demonstrably wrong, all while ignoring wrongs in the world that we could do something about if not driven over an economic cliff. The insanity of the Left's plans has not gone unnoticed, as follows: Given the amount of land required by renewable energy systems, has your Administration considered that federal land permitting requirements may preclude or stall the development of renewable projects? Also, expanding the deployment of wind and solar farms could readily conflict with the Endangered Species Act (ESA). Indeed, one can easily envision the plausible scenario whereby the ESA, operating as federal law separate from the CAA, could prevent state compliance with EPA's emissions targets. How does your Administration propose to avoid these conflicts? (Ibid.)
Wagathon — Your last post illustrates the 2nd problem I have with your posts — You are a "Hog Blogger".
When CE has sometimes +600 posts, you made it very difficult to scan, getting through your tremendous amount of rantings. Are you so egotistical to believe that someone couldn't post equally as long or numerous posts refuting your rantings? Cutting and pasting is pretty easy. Do you want a War, forcing Dr. Curry to moderate? If you have something important to say then say it — but the sheer volume and number of your postings, Geeez. Christian Schlüchter recognizes the real problem today: "many scientist are servants of politicians and are not concerned with knowledge and data." And, as in the 1975 article above, Schlüchter also wonders about whether today's, "complex and spoiled society" may face circumstances that, "brought the Roman Empire to collapse." …you don't need to be a trained climatologist to smell danger when someone says, Anthropogenic greenhouse gasses are warming the planet, so we need to ramp up taxes, institute a command-and-control economy, stop industrial development in the developing world, and, y'know, just maybe, suspend democracy and jail people who object… If Greens were simply raising money to support research into clean energy and carbon capture and the rest of it, there would be no problem and no objections. If they were to simply try to fix the problem, instead of trying to bully the rest of the world, if they were donating 100 million to solar panel research rather than pissing it down the drain of elections and 'awareness raising,' then there would be no problem… ~Prussian (What is Mann that thou art mindful of him?) Stephen, if you have something important to say then say it — but the sheer volume and number of your postings, Jeeez. David L. Hagen | October 6, 2014 at 12:18 pm | Climate Policy Implications of the Hiatus in Global Warming Ross McKitrick, Fraser Institute . . .In a low-sensitivity model, GHG emissions lead only to minor changes in temperature, so the socioeconomic costs associated with the emissions are minimal. In a high-sensitivity model, large temperature changes would occur, so marginal economic damages of CO2 emissions are larger. . . . warming has actually slowed down to a pace well below most model projections. Depending on the data set used, there has been no statistically significant temperature change for the past 15 to 20 years. . . . One implication of these points is that, since climate policies operate over such a long time frame, during which it is virtually certain that important new information will emerge, it is essential to build into the policy framework clear feedback mechanisms that connect new data about climate sensitivity to the stringency of the emissions control policy. A second implication is that, since important new information about climate sensitivity is expected within a few years, there is value to waiting for this information before making any irreversible climate policy commitments, in order to avoid making costly decisions that are revealed a short time later to have been unnecessary. This has relevance to the (in my view wrong) argument for applying lower discount rates to very long term assessments of costs and benefits on the grounds of intergenerational equity, whatever that might be. McKitrick's evidence indicates that Climate Sensitivity to CO2 is much lower than IPCC's models. Consequently, the benefits of rising CO2 will extend much longer beyond 2070. That also indicates that the projected harm will likely be much lower, later, – and more uncertain. 
Dan Hughes | October 6, 2014 at 1:41 pm | These papers are free-access available online until January 2015 at http://www.annualreviews.org/toc/statistics/1/1: Statistics and Climate, Peter Guttorp, Department of Statistics, University of Washington, Seattle, Washington 98195 For a statistician, climate is the distribution of weather and other variables that are part of the climate system. This distribution changes over time. This review considers some aspects of climate data, climate model assessment, and uncertainty estimation pertinent to climate issues, focusing mainly on temperatures. Some interesting methodological needs that arise from these issues are also considered. First paragraph of Introduction: This review contains a statistician's take on some issues in climate research. The point of view is that of a statistician versed in multidisciplinary research; the review itself is not multidisciplinary. In other words, this review could not reasonably be expected to be publishable in a climate journal. Instead, it contains a point of view on research problems dealing with some climate issues, problems amenable to sophisticated statistical methods and ways of thinking. Often such methods are not current practice in climate science, so great opportunities exist for interested statisticians. Climate Simulators and Climate Projections, Jonathan Rougier and Michael Goldstein Department of Mathematics, University of Bristol, Bristol, BS8 1TW, United Kingdom; Department of Mathematical Sciences, University of Durham, Durham, DH1 3LE We provide a statistical interpretation of current practice in climate modeling. In this review, we define weather and climate, clarify the relationship between simulator output and simulator climate, distinguish between a climate simulator and a statistical climate model, provide a statistical interpretation of the ubiquitous practice of anomaly correction along with a substantial generalization (the best-parameter approach), and interpret simulator/data comparisons as posterior predictive checking, including a simple adjustment to allow for double counting. We also discuss statistical approaches to simulator tuning, assessing parametric uncertainty, and responding to unrealistic outputs. We finish with a more general discussion of larger themes. Our purpose in this review is to interpret current practice in climate modeling in the light of statistical inferences about past and future weather. In this way, we hope to emphasize the common ground between our two communities and to clarify climate modeling practices that may not, at first sight, seem particularly statistical. From this starting point, we can then suggest some relatively simple enhancements and identify some larger issues. Naturally, we have had to simplify many practices in climate modeling, but not—we hope—to the extent of making them unrecognizable. Climate: your distribution of weather, represented as a multivariate spatiotemporal process (inherently subjective) Weather: measurable aspects of the ambient atmosphere, notably temperature, precipitation, and wind speed. Peter Lang | October 7, 2014 at 12:57 am | Dan Hughes, Interesting but … I want to know how they address these issues: 1. climate changes abruptly, not as long continuous curves as the IAMs assume 2. If not for human's GHG emissions, the next abrupt change would probably be colder not warmer (e.g back toward Little Ice Age temperatures) (because we are passed the peak of the current interglacial period). 3. 
Warming and increasing CO2 concentrations have been net beneficial for the past 200 years. 4. Why should we expect that this trend will not continue with more CO2 emissions for some considerable time to come? 5. What is an unbiased pdf of the impacts of continuing GHG emissions?
Science rules, ideology drools! The September sea-ice minimum is 5.02 million km², which matches the median scientific prediction (in June) of 4.7 million km² within 6%. Good on `yah sea-ice scientists! The June WUWT prediction was a high-side outlier. Boo on `yah, ideologists!
a fan of *MORE* discourse: "The September sea-ice minimum is 5.02 million km², which matches the median scientific prediction (in June) of 4.7 million km² within 6%. Good on `yah sea-ice scientists!" That is the *Arctic* sea ice minimum. Globally, the sea-ice extent is close to its average over the last 3+ decades: http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/global.daily.ice.area.withtrend.jpg
The dip near 15 microns in the TOA spectrum demonstrates that thermalization of terrestrial radiation energy absorbed by CO2 molecules exists. The delay would make no difference in the intensity. All of the surface radiation would eventually make it to TOA. The tiny increase in absorption lines due to a 100 ppmv CO2 increase (water vapor has 465 absorption lines in the range 5-13 microns compared to 1 at 15 microns for CO2), about 1 in 100,000, is insignificant. Reverse-thermalization to CO2 molecules at high altitude and S-B radiation from clouds provides the observed comparatively low TOA radiation at 15 microns.
Pointman probes: Tell Me Why. Don't they know that switching from growing food staples to growing biofuel crops for cars only the rich can afford has more than doubled prices of basic foods? Don't they know about the people killed in the food riots? Do they actually know anything? Do they care anyway? . . . I highly recommend this.
David L. Hagen: "Don't they know that switching from growing food staples to growing biofuel crops for cars only the rich can afford has more than doubled prices of basic foods? Don't they know about the people killed in the food riots? Do they actually know anything? Do they care anyway? . . ." I've been trying to make this point for a while and the answer seems to be: Apparently not. It really looks like they don't give a damn.
Malaria 8X worse than Global Warming. "Malaria threatens more than 40% of the world's population and kills up to 1.2 million people worldwide each year. Many of these deaths happen in Sub-Saharan Africa in children under the age of five and pregnant women. The estimates for clinical infection are somewhere between 300 and 500 million people each year, worldwide." Thus Malaria kills about 8X more people than "climate change" aka "majority anthropogenic global warming". Let's focus priorities on where it matters. http://www.rdmag.com/articles/2014/09/taking-big-bite-out-malaria?et_cid=4191706&et_rid=219918439&type=headline
Do you have anything to say about this EPA note on discount rates? http://www.rff.org/Publications/Resources/Pages/183-Benefits-and-Costs-in-Intergenerational-Context.aspx And the Socialist's Cost of Carbon?
Taken on notice. Two points come to mind: 1. The 12 economists who attended the EPA workshop on discount rates are all from inside the climate change orthodoxy. I don't see any well-known conservative economists represented. McKitrick is not on the list, nor your friend who critiqued the Stern review. Why not?
Nordhaus and Tol are both included as are some of the extreme alarmists economists. I've noticed that Nordhaus has become progressively more alarmist since 2007. I get the impression he has been influenced by the continually badgering and criticising by the climate alarmists and he is no longer as objective as he once was. 2. I don't understand how economists can argue to use discount rates that are equivalent to and less than the long run average risk free rate of return if we recognise that the decision to mitigate GHG emissions is far from risk free. It seems to me that mitigating GHG emissions could be beneficial or it could be damaging. How can it be concluded that reducing GHG emissions is risk free? Faustino, I want to change the wording on point 2 2. I don't understand how economists can argue to use discount rates that are equivalent to and less than the long-run average risk free rate of return if we recognise that the decision to mitigate GHG emissions is far from risk free. It seems to me that implementing policies which will forces huge investments to mitigate GHG emissions is a high risk strategy. As you have pointed out many times the best strategy is to build our capacity to deal with whatever problems occur and remain highly flexible (one of the best ways is to build wealth) . Forcing us to commit to high cost strategies that are hugely economically damaging for all this century, on the belief we are going to improve the lot of people centuries from now, is high risk. It seems to me the\ the discount rate should be that used for a high risk investment. Another risk is that we don't know if GHG emissions are net beneficial or net damaging. Another risk is it is very expensive to change strategy once it has been implemented. If the world implemented a carbon tax and later realised it won't succeed (as many people already realise), it would then be difficult and costly to stop it and implement a different policy. Australia provides an example of those difficulties now. Given all these investment risks how can it be concluded that policies requiring massive investment in GHG emissions reduction should be justified on the basis of a risk free discount rate? After their primaries, Republican senate candidates are becoming more moderate on climate change. For those outside the US, primaries are battles within their party to become the November candidate. Being moderate on climate change is a killer within the Republican Party, but having passed that hurdle they can pivot, and some are, to try to be competitive in the state race. It's all politics with them. http://www.huffingtonpost.com/2014/10/06/republicans-climate-change_n_5941866.html RickA | October 7, 2014 at 10:40 am | Jim D said "It's all politics with them." I am sure that eco politics (green politics) is also all politics with Democrats. Politicians are political creatures – and that includes most politicians from both major parties. As the Tea Party labels folks like me, I am a RINO (Republican in name only). Suffice to say, I don't like the Tea Party which is highly comprised of Ideologues. My opinion was formed through my brief volunteer work for Jon Huntsman (2012 Presidential campaign). Gov. Huntsman (who had a voting record a whole lot more conservative than McCain in 2008 or Romney in 2012) made his "chops in international trade. Huntsman had a lot of creative ideas in how to approach AGW using conservative principles — focusing on policies to achieve high economic growth of exporting U.S. 
high tech products (which we're good at) to developing countries . But he was stopped at the start gate by the Tea Party and literally booed off stage (and I was there) — labeled as an Al Gore clone. If Huntsman came into this CE blog, he'd get the same treatment. The money in that party is on the right wing, and money is everything in elections unfortunately. They have no tolerance for independent thinkers like Huntsman, that I even respected, and I am not at all Republican. Like the Supreme Court said, money is speech, with the corollary that if you have no money you have no speech when it comes to elections. Jeffn | October 6, 2014 at 9:12 pm | Speaking of politics, Democrats are campaigning on the promise that they won't do anything about global warming. And hoping folks like you and JimD assume they're just lying to get elected. http://freebeacon.com/politics/grimes-staffers-suggest-kentucky-dem-lies-about-coal-support/ Now, this must be puzzling to you: seeing as how the Republican position is so far out of the mainstream, why would Democrats think the only way to get elected is to adopt the Republican position? That's about local policy, not science. It's very different, although blurred for some. I expect he has not denied the science that humans are causing global warming. Jeffn | October 7, 2014 at 6:59 am | Local policy? There is no local policy on coal in the US Senate that Grimes is running for. She's promising she will oppose federal policy that would reduce the use of coal. And hoping that you won't think she's anti-science, just anti-truth. Why would she do this? You aren't suggesting AGW policy would be economically damaging, are you? "I don't like the Tea Party which is highly comprised of Ideologues…." I don't like ideologues, meaning those holding opinions contrary to my own ideology. aaron | October 7, 2014 at 10:47 am | The problem with ideologues is that they are not at all open to good ideas, like… how about letting the scientific method work? I'm basically a Libertarian. Living the DC area – it is pretty obvious that government is the problem and neither party seems horribly interested in downsizing it. I'm fine with an politician who wants to downsize the non-constitutionally-mandated parts of government. Huntsman is sort of moderate. I could see where a more liberal Republican would like him. Some of his positions give me heartburn but he is vastly better than a failed community organizer. Most people that get involved in politics are ideologues of some flavor. @Stephen Segrest | October 6, 2014 at 8:52 am | SS – obstructionism is sometimes entirely appropriate. If one can prevent the implementation of destructive policy, then that is a win. I would go so far to suggest that when we have a scenario when the science is suspect and some of the suggested policy is highly destructive and when the entire subject is highly political – then tribalism is an appropriate tactic. Tribalism arises naturally in social groups. Given that fact, I surmise it probably serves a good purpose. So, I guess you can see I don't cotton to this idea of conservatives crafting a "conservative way" to achieve the goals of the lefties. Kapish? Jim2, I start from a known fact: Nobody on the face of this Earth knows how the science of AGW will eventually play out. How it plays out in science is not conservative or liberal. How the Earth will respond in an incredibly complex climate system will determine this. 
By Conservatives not being objective (admitting that we just don't know) and opposing almost any AGW policy action, there is just a great big void — where almost all proposed policy actions are liberal ideology approaches (i.e., carbon tax, cap and trade, etc.). Jon Huntsman was trying to change this paradigm through something he made his chops on — International Trade. It was kinda a "China in your face approach" with other developing economies/countries. Paraphrasing Huntsman (as best I can), he was saying "In international trade, reciprocity reigns supreme. No country eliminates/lowers/gives incentives to its trade barriers without reciprocal and meaningful concessions from trading partners". In Huntsman's approach, the U.S. would give free market developing countries special trade status for selected products into U.S. markets IF these developing countries committed to building a low-carbon economy using U.S. high-tech products (which are often not the lowest capital cost option). An example could be a highly efficient coal power plant. But the Tea Party didn't want to listen to potential conservative approaches to AGW — he was labeled an Al Gore clone. If we get stuck with a carbon tax or cap & trade system, Republicans will have no one to blame but ourselves. Let's create some "Enterprise Zones" and see how this works. I'd start in the Philippines. According to a Pew International poll, the Philippines has the highest favorable rating of the U.S. within the World. Let's do business with folks who like us and we like them.
To paraphrase: "I will only allow you to sell products to my voters if you buy these particular products from my cronies."
1. Curry has, from what I can tell, bounded the CO2 doubling to about 1.6°C.
2. The current linear CO2 growth in the face of geometric emissions growth means it will be hard to even achieve 600 PPM.
3. For the plant growth/water conservation effect of 600 PPM we want to hit 600 PPM.
4. The AGW crowd has not proved positive feedback. They haven't disproved the null hypothesis that doubling CO2 causes a 1°C or less temperature change (the simple effect of the forcing alone).
So I do propose a policy:
1. Criminalize (felony) burning food for fuel.
2. Criminalize (felony) sequestering CO2.
3. Criminalize (felony) executive branch officials restricting CO2 emissions in any way.
4. Allow liberal allowance for arctic methane exploration. The AGW crowd keeps using this as a threat – it is time to pull the teeth of the arctic methane threat.
See – a counter policy, as requested, to deal with CO2. I'm not willing to starve future people because of low CO2 plant growth just to be AGW stylish.
Aaron — Yea, just like Obama funded Georgia Power billions of dollars to build the first nuclear power plant in decades in the solid Blue State bastion of Liberals in Georgia to get them to continue voting solid DEM.
So, you want to perpetuate the model of driving up costs to the point that they are financially uneconomical and then subsidize them when supply becomes dangerously low…
Peter Lang — Peter, I cannot critique your paper in like 5 minutes. I said at first blush, it appears that you are not looking at the load shape increment supply side decisions in generation planning.
Stephen Segrest, "Peter, I cannot critique your paper in like 5 minutes." I agree. You shouldn't have made any comments at all. You should not have misrepresented the analysis, since you haven't read it or haven't understood it.
You should not have made disingenuous, misleading, unsupported, baseless, wrong statements about the paper. To do so is a clear indication of intellectual dishonesty. And if you demonstrate it once, it is likely you do it continually. That seems to be the case, as evidenced by your previous comments on this thread I've replied to.
"I said at first blush, it appears that you are not looking at the load shape increment supply side decisions in generation planning." Which just shows you haven't read or haven't understood the paper. Take your time. Read the papers. Read the comments on the first paper "http://bravenewclimate.files.wordpress.com/2012/02/lang_renewable_energy_australia_cost.pdf" and read the comments here: http://bravenewclimate.com/2012/02/09/100-renewable-electricity-for-australia-the-cost/ Take your time. Consider it and then come back with questions, not baseless assertions.
Jim2, The claim that Republicans obstruct AGW policies is a myth that partisans like Segrest use. The GOP does not stand in the way of increased use of natural gas (it, in fact, supports it) and does not stand in the way of nuclear power. The fact that the GOP supports the only alternatives to coal that both reduce CO2 emissions and provide power cost-effectively is telling. No Republican would oppose the construction of cost-effective wind or solar by private energy firms. The only things the GOP "obstructs": ineffective, expensive policies the left wants for no other reason than partisanship. May that long continue!
JustinWonder | October 7, 2014 at 10:29 am | The GOP obstructs the pilfering of the public treasury by people that want to reward their political friends. I want more nat. gas and nuclear power to generate electricity. If someone wants to put a PV panel on their roof they can go ahead and do it, but they need to pay for it themselves – no tax credits, no utility buy backs at max prices, no tax credits for $80,000 electric cars.
Justin Wonder — You need to get outside of the echo chamber once in a while. This is not the one-sided picture you paint. I live in Florida and Duke Energy is billing us $4.5 billion for a botched effort to repair nuclear units (which they couldn't repair and are shutting the unit down) and nuclear units under construction which they have now cancelled. The tax credit on wind energy is basically the same as for new nuclear units — but nuclear has two additional goodies: Price-Anderson, and a federal guarantee to electric utilities to cap the construction costs of new nuclear units. In what you probably describe as cronyism socialism — the U.S. DOE loan guarantee program has been evenly split (1/3, 1/3, 1/3) between nuclear, transportation, and solar. I seriously question whether you know what an integrated electricity grid is and what base, intermediate, and peaking load mean. If you did, you'd know that solar has beaten the costs of fossil fuel options for peaking load for decades.
I could go on and on countering your cherry-picking with cherry-picking of my own.
1. Current solar panels are $1/W (I just priced them).
2. 2000 megawatts of solar (8000 MW nominal) is $8,000,000,000. This is more than a nuclear plant. Comparison for 2019 power generation.
3. Solar is 60% more expensive than a 4th gen nuclear power plant.
4. The solar plant is only good for about 25 years (power degraded 50%).
5. Nuclear doesn't need an expensive backup plan (it is dispatchable).
6. Gas and conventional coal are cheaper than solar – and are dispatchable.
(See the sketch just below for the capacity-factor arithmetic behind these numbers.)
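A rough sketch of the nameplate-versus-delivered (capacity factor) adjustment that both sides of this exchange are arguing about. The panel price and nameplate figures are taken from the comment above; the capacity factors and the hypothetical nuclear plant are plain assumptions for illustration, not vetted EIA cost data, and the function name average_output_mw is just an illustrative helper.

```python
# Sketch of the nameplate-vs-delivered (capacity factor) adjustment argued over above.
# All inputs are placeholders from the thread or plain assumptions, not vetted cost data.

def average_output_mw(nameplate_mw: float, capacity_factor: float) -> float:
    """Average power delivered over a year for a given nameplate capacity."""
    return nameplate_mw * capacity_factor

solar_nameplate_mw = 8000.0     # the "8000 MW nominal" figure from the comment
solar_capacity_factor = 0.25    # assumed; gives the comment's ~2000 MW average output
solar_capex = 1.00 * solar_nameplate_mw * 1e6   # $1/W panel price quoted in the comment

nuclear_nameplate_mw = 2000.0   # hypothetical plant used only for comparison
nuclear_capacity_factor = 0.90  # assumed fleet-average figure

solar_avg = average_output_mw(solar_nameplate_mw, solar_capacity_factor)
nuclear_avg = average_output_mw(nuclear_nameplate_mw, nuclear_capacity_factor)
print(f"Solar average output:   {solar_avg:.0f} MW")
print(f"Nuclear average output: {nuclear_avg:.0f} MW")
print(f"Solar capex per average MW delivered: ${solar_capex / solar_avg:,.0f}")
```

The point of the sketch is only that "MW nominal" and average MW delivered differ by the capacity factor, which is why the thread's cost-per-kWh claims cannot be compared on nameplate capacity alone.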
The EIA lists solar as "nondispatchable" so the cost of the backup plan is not included. And these are 2019 costs. So the statement that any installed solar is "cheaper than" basically anything in terms of Total System Levelized Cost of Energy is simply a lie.
Stephen Segrest | October 7, 2014 at 12:38 pm | PA — Like Justin Wonder, it looks like you don't understand the basics of electricity engineering economics either (integrated grid, dispatching base, intermediate, peaking load). Go read up on this and show some good faith in demonstrating knowledge on this and I'll have dialogue with you. Hint: the key is cost per kWh, which is hugely driven by capacity factors.
JeffN | October 7, 2014 at 12:39 pm | Echo chamber? http://www.carolinajournal.com/articles/display_story.html?id=11434 Look, when you're handing out quarter-million-dollar grants to Senators' spouses, it makes no sense to compare that favorably to any effort to support nuclear power. Republicans favor the latter; for some reason climate campaigners favor the payola. "Pilfering" is actually an understatement. Also, claiming your debating opponent must be ignorant of energy systems for failing to understand the awesome economic benefit of solar power is just plain wrong. There is nobody – not one single solitary person anywhere on this globe – preventing you or anyone else from building your cost-effective solar power. There is a group preventing the type of bogus, wasteful boondoggles we see in North Carolina, as well as all of Europe.
JeffN — You continue to show your lack of basic understanding of the engineering and politics of U.S. electric utilities. Changing "net metering" laws has been and continues to be a long uphill battle. Retail selling of electricity (other than by electric utilities) under franchise laws is flat illegal.
The Duke Power situation is interesting. PEF and Duke Power both planned nuclear plants at the same time. The PEF plant was basically a clone of the Duke plant and had strings attached by Crist that they had to shut down coal-fired plants. Given the $3 billion cost of running power lines to the site, the Duke plant was cheaper. Duke bought PEF. Given that the PEF plant, due to siting/strings, was more expensive, Duke shuttered the plant. The Crystal River plant was gross incompetence by PEF and not Duke's fault. So to some extent environmentalism killed the Levy plant (the coal plants can stay open).
Stephen Segrest: "If you did, you'd know that solar has beaten the costs of fossil fuel options for peaking load for decades." That depends on time of day of the peak. When peak demand comes later than peak solar output, as in California, solar power requires backup: http://www.caiso.com/Pages/TodaysOutlook.aspx#SupplyandDemand; http://content.caiso.com/green/renewrpt/DailyRenewablesWatch.pdf Note: California has gotten as much as 16% of total electricity from renewables, but yesterday it was only 10% because we have had consistently below average wind. If the backup is included in the cost of the PV installation, then PV loses its price competitiveness. PA quoted a price of $1 per watt for PV. Where I live, the cost of an installation is about $10,000 for 2 kW, or $5 per watt for a roof-mounted system; large installations run at about 80% of that. The materials that PV panels are made from are considered toxic waste; per MW-hr of electricity produced, PV panels produce more toxic waste than nuclear power plants.
Nuclear has a PR disadvantage because people would rather get cancer and birth defects from PV waste (and high altitude exposure to ionizing radiation) than from nuclear waste, so they don't mind the larger total exposure from sources other than nuclear. The cost of government subsidies for power is much lower for nuclear when you compute the cost per GW-hr of electricity actually produced. The US has had reliable electricity from 100+ nuclear power plants for decades. If you add in 5% or so as the bill for the failures of TMI and San Onofre, the cost is still much lower than the cost of electricity from solar. Even if you think that PA and Justin Wonder are arguing from ignorance and bad faith, remember that a lot of readers will appreciate any reliable information that you can provide them.
Stephen Segrest: "I could go on and on countering your cherry-picking with cherry-picking of my own." Sure, but that is not the only alternative. You could aim for an overall evaluation of all costs and benefits.
Another detail about pricing. Costs of manufacturing PV panels are decreasing (perhaps 10% per year over long time spans, with short-term spurts much greater), but the costs of installation are declining more slowly. Once they reach price parity on a wide scale, bidding and the scarcity of the resources will probably act to retard the rate of price decline.
Another cost besides installation is O&M. http://www.scottmadden.com/insight/407/solar-photovoltaic-plant-operating-and-maintenance-costs.html Fixed panels (less efficient) are $50/kW-y. Tracking panels are $60/kW-y. The EIA lists 0 (zero) variable O&M cost, and that isn't correct.
Matthew Marler — Of course you are correct that solar doesn't always beat the cost of peaking fossil fuel alternatives. My wording should have been better — that in many applications solar can be the least cost economic dispatch option. I've been home sick with the flu since last Friday. I took this opportunity to ask Wagathon (and others like him) to stop all this liberal ranting day after day, week after week, on daily posts. I primarily come to CE to try and learn some science — and with sometimes 600+ posts, scrolling through/following the blog becomes really difficult because of the sheer number and volume of Wagathon's repeated liberal rantings. If anybody wants to rant about anything — they should do it in "Week in Review" as Dr. Curry asked all of us to do — and stay "on point" with her other blog posts.
D o u g C o t t o n | October 7, 2014 at 6:21 pm | If you'd like to learn what is by far the most relevant science in the climate debate I suggest this comment
Stephen Segrest, "Justin Wonder — You need to get outside of the echo chamber once in a while. This is not the one-sided picture you paint. I live in Florida and Duke Energy is billing us $4.5 billion for a botched effort to repair nuclear units" Your first comment on this sub-thread demonstrates YOU are the cherry picker. Your comment is disingenuous. It shows you have little understanding of what you are talking about. And YOU should take your own advice: "You need to get outside of YOUR echo chamber once in a while." You quote meaningless figures like a $4.5 billion repair and don't put that in perspective of energy supplied, or to be supplied over remaining life, in $/MWh. Until you tell us what the $/MWh cost is, it's totally meaningless.
You should also get out of your echo chamber and learn about the cost impediments that have been added to nuclear power over the past decades as a result of 50 years of misleading, disingenuous, mostly dishonest anti-nuke propaganda by those who call themselves "Progressives" (what a joke).
Stephen Segrest, Once again, this comment applies to YOU, not P.A. You demonstrate clearly it is you that doesn't "understand the basics of electricity engineering economics". You blurt out a pile of words and motherhood statements but are clearly unable to actually apply them yourself (integrated grid, dispatching base, intermediate, peaking load). Here is a simple comparison of capital costs, cost of electricity and CO2 abatement cost for a mostly renewables versus mostly nuclear powered Australian National Electricity Market grid. The costs of the additional grid requirements are included in the estimates. The estimates use costs applicable in Australia, with Australia's WACC, labour productivity and labour rates. http://oznucforum.customer.netspace.net.au/TP4PLang.pdf Happy to take your questions on this. If you want to know more about the renewables options, see this preceding paper: http://bravenewclimate.com/2012/02/09/100-renewable-electricity-for-australia-the-cost/ Download the pdf version to see the appendices and footnotes. You can also download a simple spreadsheet and change the inputs.
[REPOST to fix formatting] Peter Lang — As an engineer, you dog-gone know what I'm talking about with bell-shaped load curves. Your ubiquitous comment on solar (peaking) versus nuclear (base load) is highly inappropriate and just plain wrong. No electric utility designs their portfolio of power plants with only base load units. If they did, costs per kWh would be through the roof because of low capacity factors. I'll read your paper and comment — but at first blush, it looks like you are comparing apples to oranges.
And this from Peter Lang, 'Renewables or Nuclear Electricity for Australia – the Costs' (April 2012): Peter Lang has undertaken comparative studies of four renewable energy scenarios with nuclear energy. The nuclear scenario is roughly 1/3 the capital cost, less than 1/2 the cost of electricity and less than 1/3 the CO2 abatement cost of the other scenarios. (Figure 6.) https://www.google.com.au/?gws_rd=ssl#q=Figure+6+PeterLang+2012
As usual, baseless, disingenuous, misleading and unsupported statements. "but at first blush, it looks like you are comparing apples to oranges." What are you referring to? What are the apples and oranges? Be specific. Make your comment clear so I and anyone else can understand what you mean and can answer your critique. And be sure to show that you are not just making nit-picking, irrelevant comments. Show that your criticism is significant and changes the conclusions. "Your ubiquitous comment on solar (peaking) versus nuclear (base load) is highly inappropriate and just plain wrong. No electric utility designs their portfolio of power plants with only base load units." What are you referring to when you say it "is highly inappropriate and just plain wrong"? Quote the part and explain why it is inappropriate and just plain wrong. It seems you haven't understood the analysis. Do you understand the modelling that was done to match the generation to the demand curve at every half hour through the year 2010? I suggest it is you that just plain doesn't understand what you are talking about.
Beth — What is the cost of solar versus, say, a combustion turbine running on oil used for peaking in Australia? This would be an example of an apples to apples comparison.
"PA quoted a price of $1 per watt for PV. Where I live, the cost of an installation is about $10,000 for 2 kW, or $5 per watt for a roof-mounted system; large installations run at about 80% of that." Well, the raw panel cost is about $300 for a 280 W panel, basically $1/W. http://www.nrel.gov/docs/fy12osti/53347.pdf 2010 cost: $3.80/Wp DC for a 187.5 MWp DC fixed-axis utility-scale ground mount; $4.40/Wp DC for a 187.5 MWp DC one-axis utility-scale ground mount. 2020 evolutionary cost vs. SunShot program target: $1.71/Wp DC for a 187.5 MWp DC fixed-axis utility-scale ground mount (SunShot target: $1.00/Wp DC); $1.91/Wp DC for a 187.5 MWp DC one-axis utility-scale ground mount (modified SunShot target: $1.20/Wp DC). http://www.solarpaneltilt.com There is a 20+% benefit to a tracker. So, consider a 2010 installation of utility solar, just for the installed panels, equivalent to a Westinghouse Electric Company AP1000 twin installation (2200 MW) at Cherokee River, given there are only about 5.62 kWh of average available solar. The installed cost for a tracker system, assuming 100% available solar, is $41 billion for a nondispatchable system vs. about $14 billion for the nukes (Duke says $6 billion, but I'm a pessimist). I haven't mentioned some other factors. http://rredc.nrel.gov/solar/calculators/PVWATTs/version1/change.html DC to AC derate factors are 0.77 under good conditions. Further: Florida is clear (less than 30% cloudy) only 70% of the time. I am dubious about the claim that any installed systems are competitive with conventional sources.
Now you are demonstrating simple-mindedness and ignorance, not even the capacity to think logically. How can you think that "solar versus say, a combustion turbine running on oil used for peaking in Australia" are comparable? The combustion turbine is fully dispatchable. It can be brought on line quickly and ramps quickly at any time of day and night, in any climate (even Antarctica in winter), has 98% availability, and is used as the emergency backup system for hospitals and military installations. How can you think that solar power can be comparable to that? You demonstrate a total lack of understanding of energy. I'd suggest you go to a blog site where your ignorance is not so obvious.
Stephen Segrest – "I could go on and on countering your cherry-picking with cherry-picking …" You win, you are a much better cherry-picker than me. Why all the hostility? Why not state your case without rancor and be a gentleman about it? I'm actually interested in reading what you have to say.
Stephen Segrest: "My wording should have been better — that in many applications solar can be the least cost economic dispatch option." Schools, for example, which operate almost exclusively in daylight hours. Pumping water for agricultural purposes. And in some foreign nations where all sources of energy are intermittent. In CA, homeowners who spend a lot of money on air-conditioning can profitably install PV panels, though the more economical choice is to go without A/C. But large scale solar and wind farms? I don't see those being economical for a very long time.
PA: "I am dubious about the claim that any installed systems are competitive with conventional sources." You and I are mostly in agreement.
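For readers trying to follow PA's back-of-envelope figures above, here is a rough reconstruction of where the $41 billion comes from. It assumes the quoted 5.62 kWh of "average available solar" means 5.62 peak-sun-hours per day (a capacity factor of roughly 0.23) and uses the 2010 NREL one-axis tracking cost of $4.40/Wp quoted in the same comment; these inputs come from the comment itself and are not independently verified.

# Rough reconstruction of PA's $41B estimate (all inputs taken from the comment above)
capacity_factor = 5.62 / 24             # assumed interpretation: 5.62 peak-sun-hours per day
nameplate_mw = 2200 / capacity_factor   # about 9,400 MW to match a 2,200 MW AP1000 twin on average
cost_dollars = nameplate_mw * 1e6 * 4.40  # at $4.40 per installed watt (2010 one-axis tracking)
print(f"~${cost_dollars / 1e9:.0f} billion")  # roughly $41 billion, vs ~$14B assumed for the nuclear plants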
I have lost the link, but I read of a school near Phoenix, AZ, that covered over its parking lot and roof with PV panels, and reduced the cost of its electricity. But that was strictly a daytime operation and their need for A/C was proportional to insolation, as was the electric power produced. Niche uses like that look promising in the US.
Matthew R. Marler, I suspect they'll never be economically viable for providing a significant proportion of world electricity supply. "The Catch-22 of Energy Storage" explains why: http://bravenewclimate.com/2014/08/22/catch-22-of-energy-storage/
'I suspect they'll never be economically viable for providing a significant proportion of world electricity supply. "The Catch-22 of Energy Storage" explains why' Peter – I believe you are wrong – and not for a good reason. The greenies have us headed for "grid control in reverse". The US grid has three parts: generation, transmission, consumption (load). Traditionally the vast majority of the control was via generation. Consumption (load) was what it was, and generation was decreased/increased to match the load. Where the grid is headed is the reverse: the generation is variable and load shedding is used to match generation. If you can turn off enough refrigerators, air conditioners, and industrial users you don't need significant standby generation for green energy sources. It is dumb and expensive, but then again it is a "green" idea so that is to be expected.
Thank you. I suspect your comment is tongue in cheek, right? Did you read John Morgan's post, "The Catch-22 of Energy Storage"? I think you and other readers would find it interesting. It's getting a lot of publicity. It's been reproduced on many other web sites and generates lots of discussion.
Where does the world's energy go? The 10 guzzlers http://www.cnbc.com/id/10000030
Rob Ellison | October 6, 2014 at 11:35 pm | 1000-km-long Rossby waves rolling in over the Gulf of Carpentaria. http://www.couriermail.com.au/news/queensland/morning-glory-cloud-weather-phenomenon-coming-to-queensland/story-fnkt21jb-1227081216729?nk=9b889cf54174a45811ba336f7dc29a08
Ocean heat content: http://www.clivar.org/sites/default/files/documents/gsop/DISCUSSION_II_LOEB.pdf From unimpeachable sources. Seems a fair amount of uncertainty to me.
You should warn people it's a 5000 TB download and takes a week on Australian NBN :)
I am sorry. It's a 72-page pdf with many color graphics. Many of the big names were involved with the document.
No need to apologise. My comment was intended as Aussie humour. I know that often doesn't arrive in the US as sent from down under. :) It's very interesting and I've already forwarded the link to some friends.
Thanks for not calling attention to Uranus this time, dougie. We don't find that amusing.
We are actually paying for this research. http://projectreporter.nih.gov/project_info_description.cfm?aid=8669524&icde=21946917 "Mounting evidence demonstrates that weight influences intimate (i.e., dating and sexual) relationship formation and sexual negotiations among adolescent girls. Obese girls consistently report having fewer dating and sexual experiences, but more sexual risk behaviors (i.e., condom nonuse) once they are sexually active." I am outraged by the sexism here! Why is our government not funding research into the statistically significant phenomenon of fat guys getting fewer dates with cheerleaders and supermodels? Does no one care about Al Gore? And where's the link to globalclimatewarmingchange?
Energy futures prices: OIL 88.50, BRENT 91.75, NAT GAS 3.926, RBOB GAS 2.3458.
Peter Lang — I did read through your paper, and started to compile at least an initial set of questions — then I saw your above post about my total lack of understanding and ignorance. I have degrees in engineering and economics — including work at the prestigious University of Chicago. I developed a leading U.S. industry standard on engineering economics modelling for project evaluation (PROVAL) which is probably used in Australia. I've testified before the U.S. Congress several times. Depending on how you answer my first set of simple questions — I may or may not choose any further dialogue with you.
(1) Did the researchers at CEEM peer review or provide any type of critique on your paper? Has any professional organization provided any peer review?
(2) Did you have access to, and did you run, the CEEM load shape model in your analysis?
Which CEEM? Peter said he used a study by the Centre for Energy and Environmental Markets (CEEM) — which I know nothing about — I assume it's part of a university?
Peter Lang | October 7, 2014 at 11:35 pm | Segrest, Your comments on this post have demonstrated you don't bother to read or try to understand what the person you are responding to says. You've demonstrated that clearly. You've also demonstrated intellectual dishonesty: http://judithcurry.com/2013/04/20/10-signs-of-intellectual-honesty/. You are not worth wasting the time on. A person who has the experience you claim would not be making such comments. And they would read the paper and the references to find the answers to the questions you asked, without making a fool of themselves.
Seems like a simple yes/no to:
1. Did CEEM or any professional organization peer review your work? Yes or No.
2. Did you run the CEEM load shape model in your analysis? Yes or No. (You just can't do what you tried to do without running a load shape model.)
Same answer, twit. Read it, like any professional or trained researcher would do.
CommonCrawl
Measuring the reproducibility and quality of Hi-C data Galip Gürkan Yardımcı1, Hakan Ozadam2, Michael E. G. Sauria3, Oana Ursu4, Koon-Kiu Yan5, Tao Yang6, Arya Kaul7, Bryan R. Lajoie2, Fan Song6, Ye Zhan8, Ferhat Ay7, Mark Gerstein9, Anshul Kundaje4,10, Qunhua Li11, James Taylor3,14, Feng Yue6,12, Job Dekker2,8,13 & William S. Noble ORCID: orcid.org/0000-0001-7283-47151 Genome Biology volume 20, Article number: 57 (2019) Cite this article Hi-C is currently the most widely used assay to investigate the 3D organization of the genome and to study its role in gene regulation, DNA replication, and disease. However, Hi-C experiments are costly to perform and involve multiple complex experimental steps; thus, accurate methods for measuring the quality and reproducibility of Hi-C data are essential to determine whether the output should be used further in a study. Using real and simulated data, we profile the performance of several recently proposed methods for assessing reproducibility of population Hi-C data, including HiCRep, GenomeDISCO, HiC-Spector, and QuASAR-Rep. By explicitly controlling noise and sparsity through simulations, we demonstrate the deficiencies of performing simple correlation analysis on pairs of matrices, and we show that methods developed specifically for Hi-C data produce better measures of reproducibility. We also show how to use established measures, such as the ratio of intra- to interchromosomal interactions, and novel ones, such as QuASAR-QC, to identify low-quality experiments. In this work, we assess reproducibility and quality measures by varying sequencing depth, resolution and noise levels in Hi-C data from 13 cell lines, with two biological replicates each, as well as 176 simulated matrices. Through this extensive validation and benchmarking of Hi-C data, we describe best practices for reproducibility and quality assessment of Hi-C experiments. We make all software publicly available at http://github.com/kundajelab/3DChromatin_ReplicateQC to facilitate adoption in the community. The Hi-C assay couples chromosome conformation capture (3C) with next-generation sequencing, making it possible to profile the three-dimensional structure of chromatin in a genome-wide fashion [1]. Recently, application of the Hi-C assay has allowed researchers to profile the 3D genome during important biological processes such as cellular differentiation [2, 3], X inactivation [4,5,6], and cell division [7] and to identify hallmarks of 3D organization of chromatin, such as compartments [1], topologically associating domains (TADs) [8,9,10], and DNA loops [11]. Because the Hi-C assay measures the 3D conformation of a genome in the form of pairs of mapped reads ("interactions") connecting different loci, many such pairs are required to adequately characterize all pairwise interactions across a complete genome [11,12,13]. Consequently, the Hi-C assay can be costly to run. It is thus essential to have accurate and robust methods to evaluate the quality and reproducibility of Hi-C experiments, both to ensure the validity of scientific conclusions drawn from the data and to indicate when an experiment should be repeated or sequenced more deeply. Reproducibility measures are also important for deciding whether two replicates can be pooled, a strategy that is frequently used to obtain a large number of Hi-C interactions [11]. 
A rich collection of literature for assessing the quality and reproducibility of a large collection of next-generation sequencing-based genomics assays, such as ChIP-seq [14] and DNase-seq [15], has been compiled over the past decade [16,17,18]. For these assays, enrichment of signal ("peaks") at loci of interest [19] and assay-specific properties of sequencing fragments have been used as indicators of the quality of an experiment [16]. Correlation coefficient [20,21,22] and statistical methods such as the irreproducible discovery rate (IDR) [17] have been used to measure the reproducibility of such assays. However, all of these methods are designed to operate on data that is laid out in one dimension along the genome. Furthermore, unlike other functional genomics assays, Hi-C data must be analyzed at an effective resolution determined by the user [13, 23, 24]. For these reasons, existing methods for assessing genomic data quality and reproducibility are not directly applicable to Hi-C data. A variety of methods have been used previously to measure the quality and reproducibility of Hi-C experiments. Ad hoc measures include using, for reproducibility, the Pearson or Spearman correlation coefficient [2, 25,26,27] and, for data quality, statistics that describe the properties of Hi-C fragment pairs [1, 28]. The drawbacks of using correlation as a reproducibility measure for genomics experiments, both because of its susceptibility to outliers and because it implicitly treats all elements of the Hi-C matrix as independent measurements, have been documented [16, 29]. In practice, because most of the Hi-C signal arises from interactions between loci less than 1 Mb apart [23, 24], the correlation coefficient will be dominated by these short-range interactions. To alleviate such problems, distance-based stratification [30] and dimensionality reduction of Hi-C signal [31], prior to measuring the correlation, have been proposed. Conversely, simple mapping statistics may be used to indicate a high or low percent of invalid or artefactual Hi-C fragments [24, 32], but such statistics reflect only the mapping stage of the analysis and cannot be immediately combined into a robust quality score. To overcome these problems, members of the ENCODE Consortium have recently developed methods for assessing both the quality and the reproducibility of the Hi-C assay [33,34,35,36]. In this study, we used large sets of real and simulated Hi-C data to assess and compare the performance of methods for measuring the reproducibility of Hi-C data and evaluating Hi-C data quality. We generated multiple benchmarks for testing the performance of reproducibility measures and established that all of these methods can accurately measure the reproducibility of Hi-C data, whereas correlation coefficient cannot. Similarly, we have used real and simulated datasets to profile the performance of quality control methods and compared these methods to established statistics that have been used as indicators of high-quality Hi-C experiments. Here, we offer a thorough assessment of quality control and reproducibility methods and describe best practices for analyzing the quality and reproducibility of Hi-C data. Experimental and simulated Hi-C datasets for performance evaluation We performed two replicate Hi-C experiments on cells from 13 immortalized human cancer cell lines from a variety of tissues and lineages using HindIII and DpnII restriction enzyme digestion (Additional file 1: Table S1). 
After aligning and filtering of paired end sequencing reads, we obtain 10 to 61 million paired reads per experiment for 11 cell types (generated using HinDIII) and more than 400 million paired reads for the remaining two deeply sequenced cell types (generated using DpnII). These Hi-C interactions serve as a readout of three-dimensional proximity of the corresponding genomic loci. The interactions are binned into fixed-sized bins, and a count of the number of Hi-C interactions that connect each pair of bins is stored in a Hi-C contact matrix. Unless otherwise noted, we used 40-kb bins because this value achieves reasonable sparsity of the Hi-C contact matrices, based on the depth of sequencing of the datasets used in our study. Also, this resolution has been adopted in multiple previous studies [7, 8]. We use the resulting Hi-C matrices as input to every reproducibility and quality control analysis in this study, except where indicated. For use in assessing reproducibility and quality measures for Hi-C data, we designed a model for simulating noisy Hi-C experiments (Fig. 1a). Our noise model aims to simulate a contact matrix from a Hi-C experiment performed on chromatin that lacks any high-order structure, such as loops and topologically associating domains. For this purpose, our simulation models two main phenomena: the "genomic distance effect," i.e., the higher prevalence of crosslinks between genomic loci that are close together along the genome [1], and random ligations generated by the Hi-C protocol [24]. For the first phenomenon, we use real Hi-C data, and we sample from the empirical marginal distribution of counts as a function of genomic distance. The second phenomenon, random ligation noise, is modeled by generating Hi-C interactions between random bin pairs (see the "Methods" section for details). Counts generated by these two "noise" components of the model can be mixed with different proportions to produce simulated "pure noise" Hi-C matrices. We then mix the simulated contacts with experimental contact matrices in varying proportions to obtain noise-injected matrices. Overview of the study. a Schematic showing the approach for generating noise-injected Hi-C matrices. In the upper panel, we generate two types of noise from real Hi-C data (center): random ligation noise (right) and genomic distance effect noise (left). The three matrices are then mixed to generate noisy datasets (lower panel). By changing the mixing proportions, we can create datasets with varying percentages of noise. b To benchmark the performance of various quality control and reproducibility measures, we compiled a large number of Hi-C replicates from 13 cell types and simulated noise-injected datasets from the original data. Real and simulated datasets binned at different resolutions and downsampled to different coverage levels are the inputs to reproducibility and quality control measures where each replicate pair and single replicate are assigned a score. Performance of each measure is evaluated on their ability to correctly rank real and simulated datasets. c Summary of the basic principles of the four reproducibility methods evaluated in this study In addition to noise, we tested the effects of sparsity and the resolution of Hi-C matrices on the performance of each method. We profiled the effects of sparsity explicitly by downsampling real Hi-C matrices to contain a set of fixed total number of intrachromosomal Hi-C interactions. 
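A schematic sketch of the noise model described above is given below, assuming dense NumPy count matrices. The function names, the loop-based sampling, and the simple proportional mixing step are ours and are not the exact procedure from the paper's Methods; they are only meant to illustrate the two noise components and how they are blended with real data.

import numpy as np

def simulate_noise(mat, n_contacts, p_distance_noise=1.0 / 3.0, rng=None):
    """Generate a 'pure noise' matrix mixing genomic-distance noise (sampled
    from the real matrix's empirical counts-vs-distance marginal) with
    random-ligation noise (uniform over bin pairs)."""
    rng = rng if rng is not None else np.random.default_rng()
    n = mat.shape[0]
    noise = np.zeros_like(mat)
    # Genomic-distance noise: keep the empirical distance profile, but place
    # each simulated contact at a random position along its diagonal.
    n_dist = int(n_contacts * p_distance_noise)
    marginal = np.array([np.diagonal(mat, d).sum() for d in range(n)], dtype=float)
    marginal /= marginal.sum()
    for d in rng.choice(n, size=n_dist, p=marginal):
        i = rng.integers(0, n - d)
        noise[i, i + d] += 1
    # Random-ligation noise: uniform over all bin pairs.
    for _ in range(n_contacts - n_dist):
        i, j = sorted(rng.integers(0, n, size=2))
        noise[i, j] += 1
    return noise

def inject_noise(real, noise, noise_fraction):
    """Blend real and simulated-noise contacts at a given proportion, rescaled
    so the blended matrix keeps roughly the real matrix's total count."""
    total = real.sum()
    blend = (1 - noise_fraction) * real / real.sum() + noise_fraction * noise / noise.sum()
    return blend * total

In the study itself, seven noise levels (5% to 50%) and two mixing ratios of the two noise components (one-third versus two-thirds random ligation noise) were used.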
Binning resolution further controls the sparsity of a Hi-C matrix, at the same time dictating the scale of chromatin organization that can be observed in a Hi-C matrix. By binning deeply sequenced Hi-C datasets containing at least 400 million intrachromosomal Hi-C interactions from two cell types, we generated Hi-C matrices binned at high, mid, and low resolutions (10 kb, 40 kb, 500 kb) and used these to investigate the effect of resolution on each method as well (Additional file 1: Table S1). A schematic of the full range of datasets used in this study to validate each method is shown in Fig. 1b. Measures for quality and reproducibility of Hi-C data Four recently developed methods for measuring the quality of and reproducibility of Hi-C experiments were assessed in this study (Fig. 1c). HiCRep [34], GenomeDISCO [35], HiC-Spector [33], and QuASAR-Rep [36] measure reproducibility, and QuASAR-QC measures quality of Hi-C data. The four reproducibility methods we evaluate employ a variety of transformations of the Hi-C contact matrix. HiCRep stratifies a smoothed Hi-C contact matrix according to genomic distance and then measures the weighted similarity of two Hi-C contact matrices at each stratum. In this way, HiCRep explicitly corrects for the genomic distance effect and addresses the sparsity of contact matrices through stratification and smoothing, respectively. GenomeDISCO uses random walks on the network defined by the Hi-C contact map to perform data smoothing before computing similarity. The resulting score is sensitive to both differences in 3D DNA structure and differences in the genomic distance effect [35] and makes it thus more challenging for two contact maps to be reproducible, as they have to satisfy both criteria to be deemed similar. HiC-Spector transforms the Hi-C contact map to a Laplacian matrix and then summarizes the Laplacian by matrix decomposition. QuASAR calculates the interaction correlation matrix, weighted by interaction enrichment. The two variants of QuASAR, QuASAR-QC and QuASAR-Rep, both assume that spatially close regions of the genome will establish similar contacts across the genome, and they measure quality and reproducibility, respectively, by testing the validity of this assumption for a single and pair of replicates. Reproducibility measures correctly rank noise-injected datasets To assess the performance of the reproducibility measures, we simulated pairs of Hi-C matrices with varying noise levels. Intuitively, a good reproducibility measure should declare the least noisy replicate pair as most reproducible and the noisiest replicate pair as least reproducible. We paired a real Hi-C contact matrix with a noisier version of the same matrix using a wide range of simulated noise levels (5%, 10%, 15%, 20%, 30%, 40%, and 50%). This procedure yielded seven pairs of replicates for each of 11 different cell types. We performed this approach using two different sets of randomly generated noise matrices, using one-third genomic distance noise and two-thirds random ligation noise or vice versa. Each replicate pair was assigned a reproducibility measure by HiCRep, GenomeDISCO, HiC-Spector, QuASAR-Rep, and Pearson correlation. Our analysis showed that all reproducibility measures were able to correctly rank the simulated datasets. Averaged over 11 different cell types, we observed a monotonic trend for all of these measures (Fig. 2a). 
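To make the transformations described above more concrete, here are simplified sketches of two of them: a distance-stratified correlation in the spirit of HiCRep (without its smoothing and stratum weighting) and a Laplacian eigenvector distance in the spirit of HiC-Spector (without its exact normalization and rescaling). The code is illustrative only and is not taken from the published implementations.

import numpy as np
from scipy.stats import pearsonr

def stratified_correlation(a, b, max_dist_bins=100):
    """Average per-diagonal correlation, so that the genomic distance effect
    shared by all Hi-C matrices does not dominate the score (HiCRep adds
    smoothing and variance-based weighting of the strata)."""
    scores = []
    for d in range(1, max_dist_bins):
        x, y = np.diagonal(a, d), np.diagonal(b, d)
        if x.std() > 0 and y.std() > 0:
            scores.append(pearsonr(x, y)[0])
    return float(np.mean(scores))

def spectral_distance(a, b, n_vectors=20):
    """Sum of distances between leading eigenvectors of the normalized graph
    Laplacians of two contact maps (eigenvectors are sign-aligned first)."""
    def leading_vecs(m):
        deg = m.sum(axis=1).astype(float)
        deg[deg == 0] = 1.0  # avoid dividing by zero for empty bins
        d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
        lap = np.eye(m.shape[0]) - d_inv_sqrt @ m @ d_inv_sqrt
        _, vecs = np.linalg.eigh(lap)  # eigenvalues returned in ascending order
        return vecs[:, :n_vectors]
    va, vb = leading_vecs(a), leading_vecs(b)
    return sum(min(np.linalg.norm(va[:, k] - vb[:, k]),
                   np.linalg.norm(va[:, k] + vb[:, k])) for k in range(n_vectors))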
Indeed, for every cell type and every measure, increasing the noise level always led to a decrease in estimated reproducibility (Additional file 1: Figure S1). Qualitatively, the trends in Fig. 2a suggest that QuASAR and HiCRep may be more robust to noise than the other reproducibility measures. Comparison of reproducibility measures. a Curves showing the mean reproducibility score assigned to 11 cell types at each noise injection level for 33% and 66% random ligation noise configurations. Vertical bars represent one standard deviation away from the mean. b Reproducibility scores assigned to biological replicate (blue), non-replicate (red), and pseudo-replicate (purple) pairs for each cell type. Coverage values are the mean number of interactions for each pair of replicates. c Reproducibility scores assigned to biological replicate (blue), non-replicate (red), and pseudo-replicate (purple) pairs from six cell types at seven different coverage levels. Dashed lines indicate the empirical threshold for distinguishing biological replicate pairs from non-replicate pairs. d Reproducibility scores assigned to biological replicate (blue) and non-replicate (red) pairs for clone-8 and S2 cells from Drosophila. Each panel shows the separation between two replicate pair types for each Hi-C reproducibility measure. Dashed lines correspond to the empirical thresholds inferred from human Hi-C data Comparing the two noise models, we saw less consistent trends. HiC-Spector assigned higher reproducibility scores to matrices with 66% genomic distance noise and 33% random ligation noise. GenomeDISCO showed the opposite behavior whereas QuASAR-Rep, HiCRep, and Pearson correlation gave similar scores regardless of the underlying noise proportions. This variability suggests that the various reproducibility measures exhibit different sensitivities to different sources of noise, thus potentially yielding complementary assessments of reproducibility. Assessment using real datasets reveals differences among reproducibility measures Inevitably, any simulation approach is only as good as its underlying assumptions; thus, we also analyzed the performance of the four reproducibility measures using real data. Specifically, we asked whether the reproducibility measures can discriminate between pairs of independent Hi-C experiments repeated on the same cell type versus pairs of experiments from different cell types. In this setup, we used three types of replicate pairs: a single pair of matrices from the same cell type (which we call "biological replicates," although each pair represents the same cells being prepped twice, rather than two different sets of cells), pairs of matrices from different cell types (non-replicates), and pairs of matrices sampled from combined biological replicates (pseudo-replicates, see the "Methods" section for details about the generation of pseudo-replicates) [34]. We assigned a reproducibility score to every matrix pair for each measure and asked if reproducibility scores differ among replicate pair types. Because pseudo-replicates are generated from pooled biological replicates, their variation solely stems from statistical sampling, with no biological (including distance effect) or technical variance. Therefore, we expect pseudo-replicates to exhibit the highest reproducibility. 
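As an aside, a pseudo-replicate pair of this kind can be generated by pooling two biological replicates and randomly splitting the pooled contacts, so that the two halves differ only by sampling. The minimal sketch below follows that general idea; the paper's exact procedure is given in its Methods.

import numpy as np

def pseudo_replicates(rep1, rep2, rng=None):
    """Pool two replicate contact matrices and assign each pooled contact to
    one of two pseudo-replicates with probability 0.5."""
    rng = rng if rng is not None else np.random.default_rng()
    pooled = (rep1 + rep2).astype(np.int64)
    half1 = rng.binomial(pooled, 0.5)  # binomial split of every bin pair's count
    return half1, pooled - half1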
Conversely, non-replicate pairs are expected to have the lowest degree of reproducibility, because they contain all the experimental variation observed in biological replicates, as well as cell type-specific differences in 3D chromatin organization. In contrast to the simulation analysis, the analysis using real datasets showed distinct differences among the five methods. For each of the 11 cell types and each reproducibility measure, we assigned reproducibility scores to a single biological replicate pair, 20 non-replicate pairs, and 3 pseudo-replicate pairs (Fig. 2b). The reproducibility score of a replicate pair is the score obtained by averaging reproducibility scores assigned to each chromosome. All four reproducibility measures and the Pearson correlation can separate replicate pair types from each other (Additional file 1: Figure S2); however, the reproducibility measures generally achieved clearer separation between different replicate pair types. These differences are statistically significant according to a one-sided Kolmogorov-Smirnov test (P < 0.01). In addition to the Pearson correlation, we considered the rank-based Spearman correlation as a potential method for assessing reproducibility. We also considered using either type of correlation in conjunction with ICE normalization. The results (Additional file 1: Figure S3) show that none of these four methods successfully separates biological replicate from non-replicate pairs. Intuitively, we prefer a measure that separates non-replicates from biological replicates with a clear margin. By this measure, the HiC-Spector measure yields the largest separation, followed by HiCRep, QuASAR-Rep, and GenomeDISCO (Fig. 2b). Among them, HiC-Spector and HiCRep correctly rank all replicate types for all 11 comparisons, with a clear separation between biological replicates and non-replicates. GenomeDISCO ranks a biological replicate lower than a non-replicate for a single case out of 11. The pair of biological replicates that GenomeDISCO ranks lower than non-replicates shows a marked difference in genomic distance effect (Additional file 1: Figure S4), to which this method is sensitive [35]. QuASAR-Rep is able to correctly rank biological replicates above non-replicates in 7 out of 11 cases. The cell types in which it fails have only 12 to 28 million interactions, suggesting that QuASAR-Rep does not perform well when coverage is low and the resolution is set to 40 kb. However, re-analysis of the same data suggests that switching to a larger resolution (120 kb) improves QuASAR-Rep's performance, leading to separation between replicates and non-replicates for all cell lines but two (data not shown). As expected, the Pearson correlation performs worse than the Hi-C-specific measures, ranking non-replicates higher than biological replicates in 7 cases. Pseudo-replicate reproducibility scores provide an upper bound for each reproducibility measure. In general, these scores show similar trends to those described above. For example, the Pearson correlation scores assigned to pseudo-replicates show a relatively wide separation from the rest of the scores, even though non-replicates and biological replicates are intermingled. On the other hand, GenomeDISCO, HiC-Spector, HiCRep, and QuASAR-Rep show the desired behavior: a high degree of separation between non-replicates and biological replicates, and a relatively small separation between biological replicates and pseudo-replicates. 
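Since ICE normalization comes up above, here is a minimal sketch of the iterative correction idea: repeatedly rescale rows and columns until every bin has roughly the same total contact count. Production implementations add bin filtering and convergence checks omitted here; this sketch is not the code used in the study.

import numpy as np

def ice_balance(mat, n_iter=50):
    """Simple iterative correction (ICE-style matrix balancing)."""
    m = mat.astype(float).copy()
    bias = np.ones(m.shape[0])
    for _ in range(n_iter):
        s = m.sum(axis=1)
        nz = s > 0
        s = s / s[nz].mean()  # relative coverage of each bin
        s[~nz] = 1.0          # leave empty bins untouched
        m /= np.outer(s, s)
        bias *= s
    return m, bias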
Reproducibility can be determined over a range of experimental coverage To directly investigate the effects of the coverage of a Hi-C experiment on the reproducibility measures, we downsampled real Hi-C matrices to contain fewer interactions and examined the effects on the resulting reproducibility scores. We limited this analysis to real data from six cell types with higher coverage, and we subsampled each replicate multiple times to contain 1 to 30 million total Hi-C interactions (see the "Methods" section for details). These datasets were used for testing the ability of each method to distinguish among different replicate types at lower coverage levels and for explicitly profiling the dependence of reproducibility scores on coverage levels. Hi-C reproducibility measures retained their ability to distinguish between replicate types, even at extremely low coverage levels. Visualization of the reproducibility scores revealed that the HiCRep, HiC-Spector, and GenomeDISCO measures successfully separate non-replicates from biological replicates even with only 5 million Hi-C interactions, a feat that Pearson correlation cannot achieve at even the highest coverage level (Fig. 2c). QuASAR-Rep can successfully separate biological replicates from non-replicates at 25 and 30 million interactions but fails to distinguish them when coverage is lower than 20 million interactions, consistent with the results from Fig. 2b. As before, pseudo-replicate pairs continue to serve as an upper bound for reproducibility measures. However, the separation between pseudo-replicates and biological replicates is reduced at lower coverage levels, and so is the separation between biological replicates and non-replicates. Furthermore, this analysis suggests we can infer empirical thresholds for these reproducibility measures that can effectively separate all biological replicates from non-replicates at a given coverage level, as explained in the "Methods" section. These empirical thresholds, selected as the midpoint between the most reproducible non-replicate pair and the least reproducible replicate pair, are shown as dashed lines in Fig. 2c and can be found in Additional file 1: Table S2. Consistent with the trends observed in the analysis of real datasets, the reproducibility of downsampled replicate pairs exhibits a dependence on sequencing depth. We observe that reproducibility scores associated with biological replicates become significantly smaller as coverage decreases, according to a one-sided Wilcoxon signed rank test (P < 0.05, Additional file 1: Figure S5). The HiCRep, GenomeDISCO, QuASAR-Rep, and Pearson correlation scores exhibit a statistically significant drop for every level of coverage. In contrast, reproducibility scores from HiC-Spector only start to significantly and consistently decay below 20 million interactions, exhibiting a lesser degree of dependence on the coverage level. This may be because the leading eigenvectors used by HiC-Spector tend to capture local or mesoscopic structures, which are less likely to be affected by coverage. Despite varying levels of dependence on coverage, downsampling analysis convincingly shows that all measures exhibit a dependence on coverage. Thus, coverage of different replicate pairs must be factored into reproducibility analyses, especially for comparative purposes. 
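Two operations used repeatedly in this section are easy to sketch: downsampling a contact matrix to a fixed number of interactions, and choosing the empirical threshold as the midpoint between the least reproducible biological replicate pair and the most reproducible non-replicate pair. The multinomial downsampling below is a simplification of resampling individual read pairs, and the function names are ours.

import numpy as np

def downsample(mat, n_target, rng=None):
    """Randomly retain n_target contacts, drawn over bin pairs in proportion
    to their original counts."""
    rng = rng if rng is not None else np.random.default_rng()
    flat = mat.astype(float).ravel()
    counts = rng.multinomial(n_target, flat / flat.sum())
    return counts.reshape(mat.shape)

def empirical_threshold(replicate_scores, non_replicate_scores):
    """Midpoint between the least reproducible biological replicate pair and
    the most reproducible non-replicate pair; only meaningful when the two
    groups separate."""
    return (min(replicate_scores) + max(non_replicate_scores)) / 2.0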
Reproducibility measures are robust to changes in resolution The resolution of a Hi-C matrix effectively dictates the scale of 3D organization observable from the data: a low-resolution matrix can only reveal compartments and TADs [1, 8], whereas high-resolution matrices reveal additional finer scale structures like chromatin loops [11]. To investigate the effect of resolution on reproducibility, we used deeply sequenced Hi-C replicates with at least 400 million intrachromosomal interactions generated from the HepG2 and HeLa cell lines. From these data, we generated real and simulated replicate pairs at 10-kb, 40-kb, and 500-kb resolution, and we measured the reproducibility of each replicate pair. HiCRep, GenomeDISCO, HiC-Spector, QuASAR-Rep, and Pearson correlation accurately measure reproducibility at both high and low resolutions. The four Hi-C-specific methods can correctly rank pseudo-, biological, and non-replicate pairs at 10-kb, 40-kb and 500-kb resolutions (Fig. 3a) with a clear margin between biological replicate and non-replicate pairs. Surprisingly, we found that the Pearson correlation can correctly rank replicate types for these deeply sequenced datasets. Notably, the reproducibility scores from the four methods are largely independent of resolution. While GenomeDISCO and especially QuASAR-Rep exhibit some dependence on resolution, assigning lower reproducibility scores to replicates with lower coverage, they maintain a clear boundary with large margins between biological and non-replicates at all resolutions. However, the Pearson correlation exhibits a larger degree of dependence on resolution for all replicate pair types and maintains relatively smaller margins between non-replicate and biological replicate pairs. Simulated datasets further validate that reproducibility scores from each method decrease with increasing levels of noise at 10-kb, 40-kb and 500-kb resolution (Fig. 3b). Effects of resolution on reproducibility measures. a Reproducibility scores assigned to biological replicate (blue), non-replicate (red), and pseudo-replicate (purple) pairs from HepG2 and HeLa Hi-C datasets at 10-kb, 40-kb and 500-kb resolutions. b Reproducibility scores assigned to different cell types at different resolutions, plotted as a function of noise level. c Reproducibility scores assigned to downsampled biological replicate pairs at different resolutions. Both the HepG2 and HeLa datasets contain > 400 million read pairs. Next, we used deeply sequenced datasets to further investigate the effect of coverage on reproducibility scores of biological replicates at three resolution levels using a wider range of coverage values (30, 60, 120, 240, and 400 million intrachromosomal interactions). For HiCRep, QuASAR-Rep, and GenomeDISCO, we observed that reproducibility scores tend to plateau at 240 million interactions at 10-kb and 40-kb resolutions, whereas reproducibility scores of 500-kb resolution matrices benefit little from higher coverage (Fig. 3c). Consistent with our previous observations, HiC-Spector exhibits a lower degree of dependence on coverage, with scores reaching maxima at 120 million interactions. Overall, the four Hi-C reproducibility measures exhibit robustness to coverage and resolution differences, as measured by their ability to distinguish between replicate and non-replicate pairs. Next, we tested whether the reproducibility measures can be used to select empirically the optimal resolution for a Hi-C dataset.
Although resolution strongly influences almost every downstream analysis of Hi-C data, this parameter is generally set in an ad hoc fashion. To explore the performance of the measures as a function of the resolution parameter, we binned four pairs of biological replicates at increasingly high resolutions (40 kb, 20 kb, 10 kb, and 5 kb) and asked if the reproducibility scores of biological replicates decay significantly at higher resolutions. We chose six samples generated using HindIII with coverage values ranging from 15 million to 60 million interactions and two samples generated using DpnII with coverage of ~ 400 million interactions. We observed that the four reproducibility measures show variable trends in how reproducibility scores assigned to biological replicates decay with respect to increasing resolution (Additional file 1: Figure S6). For HiCRep, GenomeDISCO, and QuASAR-Rep, the HindIII replicates (A549, G410, and LNCaP) exhibit a decay in reproducibility scores, whereas the scores assigned to replicate pairs generated by DpnII (HepG2) are more robust to changes in resolution. Notably, for these three reproducibility measures, the degree of decay also correlates with the sequence coverage of the data. For HiC-Spector, we do not observe consistent trends. These observations generally support the idea that deeply sequenced replicates generated by a 4-cutter such as DpnII can support resolutions higher than 40 kb, whereas relatively shallow replicates (< 100 million read pairs) generated using a 6-cutter are not suitable for binning resolutions higher than 40 kb. However, given the lack of a clear elbow or maximum in Additional file 1: Figure S6, we do not recommend using reproducibility scores to attempt to select an appropriate resolution. Finally, we compared the run times of each reproducibility measure, using a large number of pairs of chromosome 21 contact matrices binned at 40-kb resolution. As seen in Additional file 1: Figure S7, QuASAR-Rep achieves the fastest median running time (0.82 s), followed by HiC-Spector (2.76 s), GenomeDISCO (5.77 s), and HiCRep (9.00 s). Reproducibility measures accurately quantify reproducibility of Hi-C data from non-human genomes We investigated whether the four Hi-C reproducibility measures can be applied to data derived from a non-human genome. We wanted to investigate a genome that is markedly different from human, but replicate Hi-C experiments in organisms other than humans and mice are rare. We used Hi-C data from Ramirez et al., which has two biological replicates from two cell types (clone-8 and S2) from the fruitfly Drosophila melanogaster [37]. The fruitfly genome is approximately 18 times smaller than the human genome. For this analysis, we binned the Hi-C matrices at 10 kb and compared the reproducibility of the four large, non-heterochromatic chromosomes in Drosophila (chromosomes 2, 3, 4, and X). As before, we assigned reproducibility scores to each replicate pair and each non-replicate pair. The results show that biological replicate pairs are clearly separated from non-replicates for each measure in both cell types (Fig. 2d). Furthermore, for three out of the four reproducibility measures, the empirical thresholds that we inferred from the human Hi-C data (shown as dashed lines in Fig. 2d) generalize to the fruitfly genome.
Noise reduces the consistency and the prevalence of higher order structures in Hi-C matrices Having investigated four different methods for evaluating the reproducibility of a given pair of Hi-C matrices, we now focus on methods for evaluating the quality of a single Hi-C matrix. As before, we perform this evaluation by injecting noise into real Hi-C data, producing a collection of 88 matrices corresponding to 11 cell types and 8 different noise profiles (see the "Methods" section). Among our four Hi-C reproducibility measures, only one (QuASAR) provides a variant (QuASAR-QC) to assess the quality of a single matrix. The procedure yields a single, bounded summary statistic indicative of homogeneity of the underlying sample population and the signal-to-noise ratio of the interaction map. In addition to QuASAR-QC analysis, we profiled two well-known features of 3D organization: statistically significant long-range contacts [38, 39], which include DNA loops, and topologically associating domains (TADs). Intuitively, we expect that significant contacts and TADs should be harder to detect in noisy matrices and that such matrices should have a lower degree of consistency. Our analysis suggests that QuASAR-QC is indeed sensitive to the noise and the coverage of a Hi-C matrix. For each simulated Hi-C matrix from 11 cell types, QuASAR-QC detects a perfectly monotonic relationship between the noise level and the consistency of the matrix (Fig. 4a). The same trend is observed in deeply sequenced HepG2 and HeLa cell types at 10-kb, 40-kb, and 500-kb resolutions (Additional file 1: Figure S8). Although the majority of noise-free combined replicates are assigned a QuASAR-QC score ranging from 0.05 to 0.07, three cell types have strikingly lower QuASAR-QC scores ranging from 0.02 to 0.03. The Hi-C matrices from these three cell types (LNCaP, SKNDZ, SKNMC) contain fewer Hi-C interactions. Thus, the lower consistency scores are likely partially due to the sparsity that results from low experimental coverage (Additional file 1: Table S1). Furthermore, investigation of contact probabilities at given genomic distances for each cell type revealed that the three cell types with lower QuASAR-QC scores have significantly higher contact probabilities at genomic distances larger than 50 Mb (Additional file 1: Figure S9). Because such long-range contacts are unlikely to occur due to the organization of chromatin, it is likely that such long-range contacts represent random ligation of uncrosslinked DNA fragments, which is a known source of noise in a Hi-C experiment [24]. Thus, the QuASAR-QC measure is potentially sensitive to both the level of simulated noise and the differences in the level of inherent noise that each combined replicate contains. Quality measures. a QuASAR-QC scores assigned to noise-injected matrices from 11 cell types. b Total number of significant contacts above a 5% FDR threshold from noise-injected matrices from 11 cell types. c Violin plots showing the distribution of TAD boundary distances between biological replicates and noise-injected replicates for T470 cells. There is no significant change in the distribution of TAD boundary distances at any given noise level. d QuASAR-QC scores assigned to downsampled replicates from six different cell types. e Total number of significant contacts above a 5% FDR threshold from downsampled replicates from six different cell types.
f Violin plots showing the distribution of distances between domain boundaries in biological replicates and noise-injected replicates for T470 cells. In panels c and f, asterisks indicate that the distribution of boundary distances is significantly larger than the null distribution, which is obtained by comparing biological replicates Statistically significant mid-range (50 kb–10 Mb) interactions are depleted in noisy Hi-C matrices. We identified statistically significant Hi-C contacts using Fit-Hi-C [38] for each of the Hi-C matrices that make up our simulated dataset. Because robust identification of such contacts requires deeply sequenced datasets that contain large numbers of Hi-C interactions, we chose to use a somewhat liberal false discovery rate threshold of 0.05 to facilitate discovery of statistically significant contacts. For 11 cell types, we observed that 8 out of 11 cell types exhibit a perfect or near perfect anti-correlation between the injected noise percentage and the total number of significant interactions (Fig. 4b). For the other three cell lines (LNCaP, SKNDZ, SKNMC), Fit-Hi-C identifies almost no significant contacts with or without any noise injection, further supporting the conclusion that these Hi-C datasets have low quality. These three cell lines are also the cell lines that have the lowest QuASAR-QC scores, corroborating the results between these two independent analyses. For the deeply sequenced two datasets (HepG2 and HeLa), we observed a similar trend at both 10-kb and 40-kb resolutions, with a higher number of significant mid-range contacts due to the higher coverage, as expected (Additional file 1: Figure S10). Surprisingly, we found that topologically associating domain detection is highly robust to noise. We identified TADs using the insulation score [5, 40] method for the 88 simulated matrices, and we characterized the changes in the total number of TADs and TAD size distribution and the changes to TAD boundaries with respect to the noise level. The total number of identified TADs and their size distribution are only altered at the highest level of noise injection (Additional file 1: Figures S11 and S12). In addition, TAD boundaries between the original replicate and noise-injected levels exhibit the same degree of variation between two biological replicates, further supporting the idea that TAD boundaries identified with the insulation score approach are highly robust to noise (Fig. 4c, Additional file 1: Figure S13). Quality control measures require different levels of experimental coverage Continuing our assessment of Hi-C quality measures, we used downsampled Hi-C matrices to investigate the relationship between experimental coverage and each QC measure using a similar setup as before (see the "Methods" section). Quality control metrics exhibit a predictable dependence on the coverage of Hi-C matrices. For each of the six cell types we downsampled, we observed that QuASAR-QC scores are lower for Hi-C matrices with fewer interactions (Fig. 4d). We observe the same trend for deeply sequenced matrices at 10-kb and 40-kb resolutions; however, QuASAR-QC scores at 500 kb tend to benefit less from deeper coverage, likely because coarse resolutions do not require large numbers of Hi-C interactions (Additional file 1: Figure S14). Similarly, the number of statistically significant long-range interactions also decreases as we reduce the number of total Hi-C interactions. 
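As an aside, the kind of significance calling counted here can be approximated in a few lines: estimate an expected count for each genomic distance, compute a Poisson p value for each bin pair against that expectation, and apply a Benjamini-Hochberg cutoff. This is a heavily simplified stand-in for Fit-Hi-C [38], which uses spline-fitted expectations, refinement of the null, and bias correction; the sketch below only illustrates the idea.

import numpy as np
from scipy.stats import poisson

def significant_contacts(mat, fdr=0.05, max_dist_bins=250):
    """Return (i, j) bin pairs whose raw counts exceed a distance-based Poisson
    expectation at the given FDR (Benjamini-Hochberg)."""
    records = []
    for d in range(1, min(max_dist_bins, mat.shape[0])):
        diag = np.diagonal(mat, d)
        expected = diag.mean()       # crude per-distance expectation
        if expected == 0:
            continue
        for offset, count in enumerate(diag):
            p = poisson.sf(count - 1, expected)   # P(X >= count)
            records.append((p, offset, offset + d))
    # Benjamini-Hochberg: keep the largest k with p_(k) <= fdr * k / m
    records.sort(key=lambda r: r[0])
    m = len(records)
    max_k = 0
    for k, (p, i, j) in enumerate(records, start=1):
        if p <= fdr * k / m:
            max_k = k
    return [(i, j) for (p, i, j) in records[:max_k]]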
However, the number of significant interactions decreases at a much higher rate: even at 15 million interactions, most cell lines lose the majority of their significant interactions (Fig. 4e). Larger numbers of significant interactions are detected in deeply sequenced datasets, due to added statistical power, but a similar relationship between coverage and the number of significant contacts is observed at both 10-kb and 40-kb resolutions (Additional file 1: Figure S15). Conversely, we found that TADs detected by the insulation score are robust to low coverage levels. Using the same approach as for the noise-injected datasets, we found that the total number of TADs and their size distribution are not altered by lower coverage (Additional file 1: Figures S16 and S17). Indeed, the distances between TAD boundaries identified at lower coverage and in the original replicates only differ from the baseline distribution at 10 million or fewer interactions (Fig. 4f, Additional file 1: Figure S18).

Quality control measures are consistent with mapping statistics

To further validate the performance of the quality control measures at our disposal, we investigated the relationship between the QuASAR-QC scores assigned to real Hi-C matrices and various read-mapping statistics that have been used previously to evaluate Hi-C data quality [24]. The four statistics we compared against are the percentages of fragment pairs that can be mapped uniquely to the genome (aligned pairs), fragment pairs from the same restriction fragment (invalid pairs), intrachromosomal interactions (intrachromosomal percentage), and fragment pairs that are repeated in the dataset (PCR duplicate rate). Overall, we observe varying degrees of correlation between the quality control measures and the mapping statistics for biological replicates. The percentage of aligned pairs is positively correlated with QuASAR-QC scores, consistent with what one would intuitively expect from high-quality sequencing libraries (Fig. 5a). The percentage of invalid pairs is weakly anti-correlated with QuASAR-QC scores, consistent with the fact that invalid pairs represent uninformative Hi-C interactions (Fig. 5b). However, we observed the highest degree of correlation between QuASAR-QC scores and the intrachromosomal percentage (Fig. 5c). In a typical Hi-C experiment, a portion of interchromosomal interactions result from random ligation of non-crosslinked fragments; thus, a significant enrichment of interchromosomal interactions, which results in a depletion of intrachromosomal interactions, indicates a low-quality Hi-C experiment. In particular, six biological replicates with lower than 30% intrachromosomal interactions have the lowest QuASAR-QC scores; these replicates are from the LNCaP, SKNDZ, and SKNMC cell types. Analysis of downsampled data shows that this effect is not simply due to the overall lower sequencing depth of these three replicates (Additional file 1: Figure S19). These replicates were also identified as having lower quality in our simulation studies (Fig. 4a) and are depleted for significant mid-range interactions, establishing the overall consistency of the quality control measures. We note that this finding is consistent with the previously suggested range of 40–60% intrachromosomal interactions for high-quality experiments [24]. The PCR duplicate rate is uncorrelated with QuASAR-QC. Note that the PCR duplicate rate may be influenced by overall coverage, which we have not controlled for in this experiment.
Nonetheless, even for sets of experiments with very similar coverage (red dots in Fig. 5d), we observe very little correlation.

Fig. 5 Comparison of QuASAR-QC to mapping statistics. Scatter plots of QuASAR-QC scores of biological replicates from 13 cell types plotted against quality statistics that describe percentages of a successful mapping, b artifactual Hi-C fragments, c intrachromosomal interactions, and d PCR duplicates. Dots correspond to low-coverage Hi-C replicates from 11 cell types generated using HindIII, and triangles correspond to replicates from two deeply sequenced cell types generated with DpnII. Red dots correspond to a subset of samples with very similar total coverage (138–171 million read pairs). Each plot lists two Pearson correlation coefficients: the correlation between the given statistic and QuASAR-QC scores for only the 11 HindIII cell types, and the correlation for all 13 cell types.

Discussion and conclusions

We evaluated the recently proposed methods for measuring the quality and reproducibility of Hi-C experiments. Using a rich set of Hi-C experiments from a variety of human cell types, we tested whether these methods can identify reproducible and high-quality experiments. Furthermore, we generated Hi-C contact matrices with controlled levels of noise by designing a simulated noise injection process. Our analysis shows that these measures perform well and improve upon the shortcomings of generic or qualitative approaches.

The Hi-C reproducibility measures that we evaluated assess reproducibility more accurately than the Pearson or Spearman correlation for real and simulated datasets. In particular, measures specifically designed for Hi-C data can better distinguish subtle differences in the 3D organization of different cell types, because these methods directly account for the special noise properties of this data type that are overlooked by traditional similarity scores.

Selecting an appropriate reproducibility measure for a given study may depend in part upon the goals of the study. A scientist may be primarily interested in a measure that distinguishes between biological replicates and non-replicates. Such a goal might be appropriate, for example, if the method will be used to check for sample swaps during large-scale experiments. In this setting, our results show that HiC-Spector often had the best margin among all four measures (Fig. 2b and c). This is true even when we place all four measures on a similar scale by using the variance associated with non-replicate pairs (data not shown). On the other hand, simply discriminating between replicates and non-replicates may not be sufficient in some contexts. If the study aims to use the reproducibility measure to quantify similarities among various experiments, then HiCRep has been shown previously to discriminate well among cell types [34], whereas the other methods in this study have not been examined in this fashion. Furthermore, our analysis also suggests that the different reproducibility measures may be more sensitive to different types of noise, with GenomeDISCO showing more sensitivity to random ligation noise than to genomic distance noise, HiC-Spector showing the opposite behavior, and QuASAR-Rep and HiCRep showing similar sensitivities to both types of noise (Fig. 2a). Because genomic distance noise preferentially affects short-range Hi-C interactions, this observation is consistent with the hypothesis that HiC-Spector largely focuses on local structures, which are detected by short-range Hi-C interactions.
Overall, QuASAR-Rep and HiCRep appear to exhibit a lower sensitivity to varying noise levels than HiC-Spector and GenomeDISCO. Also, GenomeDISCO tends to be more sensitive to differences in the genomic distance effect between the samples compared [35].

The scores produced by all four reproducibility methods decrease with decreasing sequencing depth at a fixed resolution, or with increasing resolution at a fixed sequencing depth. Nonetheless, three out of four methods (GenomeDISCO, HiCRep, and HiC-Spector) show robustness to increasing sparsity, as measured by their ability to distinguish replicate from non-replicate pairs. Only QuASAR-Rep fails to measure reproducibility accurately for the most sparse datasets at high resolutions, though this effect is ameliorated if the data is analyzed at a lower resolution (data not shown). Thus, we hypothesize that one reason why GenomeDISCO and HiCRep perform well on low-coverage datasets is that they perform smoothing on the contact matrix. Overall, these results suggest that experimenters can assess whether a given set of samples is "reproducible enough" with as few as 5 million valid Hi-C interactions and then follow up with deeper sequencing. Among the four methods, HiC-Spector exhibits the least dependence on sequencing depth (Fig. 3c) or resolution (Additional file 1: Figure S6). These results are further consistent with the hypothesis that HiC-Spector focuses on local features of chromatin structure, which explains HiC-Spector's robustness to low coverage. Note that if the goal of a study is to quantify similarities among various experiments, then the dependence of reproducibility scores on data sparsity must be taken into account. For example, in our study, the SKMEL5 and SKNMC experiments differ in sequencing depth by a factor of 2. This difference could confound attempts to cluster or hierarchically organize cell types. In such a setting, all datasets should be randomly downsampled to a common sequencing depth prior to analysis.

An important question is whether the methods and thresholds derived here will generalize to non-human genomes. Preliminary analysis (Fig. 2d) suggests that the empirical reproducibility thresholds derived for GenomeDISCO, HiCRep, and HiC-Spector may generalize to the much smaller Drosophila genome, whereas the QuASAR-Rep measure does not. However, this result is preliminary due to the small number of currently available, replicated Hi-C experiments in non-human genomes.

The QuASAR-QC measure provides an interpretable score that can accurately rank simulated datasets according to noise levels and distinguish low-quality real Hi-C experiments from high-quality ones. This measure correlates with previously established statistics that indicate high quality in a Hi-C experiment and that have been used as qualitative indicators of quality. Each of these statistics captures a different source of error in a Hi-C assay. In contrast, QuASAR-QC offers a single score that allows direct ranking of multiple experiments. Significant mid-range interactions, such as DNA loops, are also depleted in low-quality Hi-C experiments in both simulated and real datasets. Surprisingly, we found that TAD detection is fairly robust to all but high levels of noise, presumably because TAD detection only requires that a dataset contain a sufficient proportion of valid short-range Hi-C interactions and ignores mid- and long-range interactions.
Unfortunately, it is challenging to convert the enrichment of such features into a quality control measure, due to other quality-independent biological processes that can cause variation in these features. However, a near-total depletion of these features, mid-range interactions in particular, may certainly indicate lower quality overall.

We anticipate that the reproducibility measures we evaluated in this study may be applicable to data from recently developed single-cell Hi-C assays [41,42,43]. The primary challenge in this setting would be the extreme sparsity of single-cell data. Our experiments show that, even when we randomly downsample to 1 million interactions per cell, all four methods are capable of distinguishing replicates from non-replicates (Fig. 2c), with the best separation provided by HiCRep. This difference may arise because HiCRep incorporates an explicit smoothing procedure; in contrast, GenomeDISCO uses an implicit smoothing procedure, and the other two methods do not perform smoothing at all. Note that these results do not fully resolve the question of whether the reproducibility measures will generalize to single-cell data, because in addition to higher sparsity, the variance and noise characteristics of single-cell data are expected to differ markedly from those of bulk Hi-C data. Hence, exploring the applicability of these methods to single-cell Hi-C data more fully is an important direction for future research.

An additional direction for future research is the development of alternative score functions that are designed to focus on particular aspects of chromatin architecture. For example, in the context of single-cell Hi-C analysis, measures have been developed that focus entirely on the genomic distance effect, for use in segregating cells according to cell cycle stage [42]. Similarly, for bulk or single-cell Hi-C, researchers may wish to separately assess whether two cells or cell types exhibit similar chromosome territories, compartment structure, domain structure, or patterns of looping interactions. Developing scores that separately assess these aspects of genome 3D architecture will facilitate automated inference from growing Hi-C datasets.

We release a software package that incorporates the four reproducibility measures and the QuASAR-QC measure (https://github.com/kundajelab/3DChromatin_ReplicateQC). Until recently, proven measures have been lacking, and there is currently no standard for measuring the quality and reproducibility of Hi-C data. This tool will greatly simplify the task of robustly measuring both the quality and the reproducibility of Hi-C datasets, using the methods we show to be accurate in this study. We also propose a set of empirical quality and reproducibility thresholds for use at various coverage levels, which are built into the software package to make it easy to determine whether samples pass quality and reproducibility standards (Additional file 1: Table S2).

While the methods we compared are tailored for Hi-C data, similar chromosome conformation capture assays such as capture Hi-C [44] and ChIA-PET [45] are used to study three-dimensional interactions in the genome. These assays differ from Hi-C due to their targeted nature; however, they share many properties of the Hi-C assay, such as the genomic distance effect, and can be represented as a contact matrix similar to Hi-C [46, 47].
Reproducibility and quality measures for these assays are generally lacking, raising the possibility of adapting the methods we evaluate here to these assays.

In summary, we show that the recently proposed Hi-C quality and reproducibility measures accurately assess these properties on a large collection of real and simulated data. By profiling various parameters of Hi-C contact matrices, we describe best practices for applying and interpreting these measures. We also make available a convenient software tool that simplifies the application of these measures to Hi-C datasets. We hope that adoption of this standard toolkit will help to improve the quality and reproducibility of Hi-C data generated in the future.

Measures of reproducibility

HiCRep

This method assesses reproducibility by taking into account two dominant spatial features of Hi-C data: distance dependence and domain structure. The method first smooths the given Hi-C matrices to help capture domain structures and reduce stochastic noise due to insufficient sampling. It then addresses the distance-dependence effect by stratifying the Hi-C data according to genomic distance. Specifically, the method consists of two stages. In the first stage, HiCRep smooths the raw Hi-C contact map using a 2D mean filter, which replaces the read count of each contact with the average count of all contacts in its neighborhood. The neighborhood size is obtained from a deeply sequenced benchmark dataset using a training procedure. In this analysis, neighborhood size parameters of 20, 5, and 1 were used for the resolutions of 10 kb, 40 kb, and 500 kb, respectively. Smoothing improves the contiguity of regions with elevated interaction, consequently enhancing the domain structures. In the second stage, HiCRep takes into account the distance-dependence effect using a stratification and aggregation strategy. This stage consists of two steps. The algorithm first stratifies the contacts according to the genomic distances of the contacting loci and computes the correlation coefficient within each stratum. HiCRep then assesses the reproducibility of the Hi-C matrix by applying a novel stratum-adjusted correlation coefficient statistic (SCC), which aggregates the stratum-specific correlation coefficients using a weighted average, with the weights derived from the Cochran-Mantel-Haenszel (CMH) statistic. The SCC has a range of [−1, 1] and is interpreted in a way similar to the standard correlation coefficient.
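To make the two-stage procedure concrete, the following is a minimal Python sketch of a HiCRep-style calculation: a 2D mean filter followed by per-stratum correlations combined with a weighted average. It is not the reference HiCRep implementation; the weighting shown is a simplified stand-in for the CMH-derived weights, and the function names and defaults are ours.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import pearsonr

def smooth_matrix(mat, h):
    """2D mean filter: replace each entry by the mean of its (2h+1) x (2h+1) neighborhood."""
    return uniform_filter(mat.astype(float), size=2 * h + 1, mode="constant")

def scc_like_score(mat_a, mat_b, h=5, max_dist=200):
    """Stratum-adjusted correlation, sketched: smooth both matrices, stratify contacts by
    genomic distance (diagonal offset, in bins), correlate within each stratum, and combine
    the per-stratum correlations with weights that grow with stratum size and variance
    (a simplification of the CMH-based weights used by HiCRep)."""
    a, b = smooth_matrix(mat_a, h), smooth_matrix(mat_b, h)
    num = den = 0.0
    for d in range(1, min(max_dist, a.shape[0])):
        xa, xb = np.diagonal(a, d), np.diagonal(b, d)
        if len(xa) < 2 or xa.std() == 0 or xb.std() == 0:
            continue  # skip degenerate strata
        r, _ = pearsonr(xa, xb)
        w = len(xa) * xa.std() * xb.std()
        num += w * r
        den += w
    return num / den if den > 0 else float("nan")
```

With h = 5 the filter corresponds to the 40-kb neighborhood size parameter quoted above; the 10-kb and 500-kb analyses would use h = 20 and h = 1, respectively.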
GenomeDISCO

This method focuses on two key aspects of contact maps: the need for smoothing and the multiscale nature of these maps. The need for smoothing arises because contact maps are insufficiently sampled, especially at low sequencing depths. This means that a pair of genomic regions can exhibit a low count either from a lack of contact or from insufficient sampling. This problem is addressed by smoothing the data, essentially assuming that two contact maps are reproducible as long as they capture similar higher order structures, even if they differ in terms of individual contacts. GenomeDISCO investigates contact maps at multiple scales by comparing them at different levels of smoothing and computing a reproducibility score that takes all these comparisons into account.

The smoothing approach is based on random walks on networks. Each contact map is treated as a network, where each node is a genomic region and each edge is weighted by the Hi-C count matrix, following normalization. In this work, square root normalization was used, but similar results were obtained with alternative normalization methods, including simple row- and column-based normalization or Knight-Ruiz normalization [48] (data not shown). Random walks are performed on the network to smooth the data, asking, for each pair of nodes, what the probability is of reaching node i from node j if t steps are allowed in a random walk biased by the edge weights. The smoothed data can be computed by raising the adjacency matrix of the weighted network to the power t. Lower values of t perform local smoothing of the data, revealing structures such as domains, while larger values of t emphasize compartments. This graph-based smoothing scheme aims to preserve sharp domain boundaries that 2D methods may dilute.

To obtain the GenomeDISCO reproducibility score, each contact map is separately smoothed across a range of t values. For each value of t, the L1 distance (i.e., the sum of the absolute values in the difference matrix) between the two smoothed contact maps is computed and normalized by the average number of nodes with non-zero total counts across the two original contact maps compared. Afterward, a combined distance between the two contact maps is obtained by computing the area under the curve of the L1 difference as a function of t. This allows multiple levels of smoothing, and thus multiple scales, to be considered when computing the score. Finally, this distance is converted into a reproducibility score as follows:

$$ \mathrm{Reproducibility} = 1 - \left(\mathrm{combined\ distance}\right) $$

This score lies in the range [−1, 1], with higher scores representing higher reproducibility. This is because, for each node, the maximum L1 difference is 2, corresponding to the case when the node has mutually exclusive contacts in the two contact maps being compared. Thus, the combined distance lies in the range [0, 2], making the reproducibility score fall in the range [−1, 1]. Parameter optimization on an orthogonal dataset revealed an optimal value of t = 3 [35], which was used in this study. In all pairwise comparisons in this paper, the sample with higher coverage was downsampled to match the coverage of the other sample.
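The random-walk smoothing and scoring can be sketched in a few lines of Python. This is an illustrative simplification rather than the GenomeDISCO implementation: it uses simple row normalization, a single walk length (the t = 3 used in this study), and dense NumPy matrices; the names are ours.

```python
import numpy as np

def row_stochastic(contact_map):
    """Convert a symmetric, normalized contact map into a row-stochastic transition matrix."""
    totals = contact_map.sum(axis=1, keepdims=True)
    totals[totals == 0] = 1.0  # leave empty rows all-zero instead of dividing by zero
    return contact_map / totals

def genomedisco_like_score(map_a, map_b, t=3):
    """Smooth each map with a t-step random walk (matrix power of the transition matrix),
    take the L1 difference, normalize by the average number of non-empty nodes, and
    convert the distance into a score in [-1, 1]."""
    sa = np.linalg.matrix_power(row_stochastic(map_a), t)
    sb = np.linalg.matrix_power(row_stochastic(map_b), t)
    n_nodes = 0.5 * ((map_a.sum(axis=1) > 0).sum() + (map_b.sum(axis=1) > 0).sum())
    distance = np.abs(sa - sb).sum() / max(n_nodes, 1.0)
    return 1.0 - distance
```

Scanning several values of t and integrating the per-t distances, as described above, recovers the multiscale version of the score.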
HiC-Spector

The starting point of the spectral analysis is the Laplacian matrix L, defined as L = D − W, where W is a symmetric and non-negative matrix representing a chromosomal contact map and D is a diagonal matrix with \( D_{ii} = \sum_j W_{ij} \). The matrix L is further normalized by the transformation \( D^{-1/2} L D^{-1/2} \), and its leading eigenvectors are found. As in other commonly used dimensionality reduction procedures, the first few eigenvalues are of particular importance because they capture the basic structure of the matrix, whereas the later eigenvalues are essentially noise. Given two contact maps \( W^A \) and \( W^B \), their corresponding Laplacian matrices \( L^A \) and \( L^B \) and the corresponding eigenvectors are calculated. Let \( \{\lambda_0^A, \lambda_1^A, \ldots, \lambda_{n-1}^A\} \) and \( \{\lambda_0^B, \lambda_1^B, \ldots, \lambda_{n-1}^B\} \) be the spectra of \( L^A \) and \( L^B \), and let \( \{\upsilon_0^A, \upsilon_1^A, \ldots, \upsilon_{n-1}^A\} \) and \( \{\upsilon_0^B, \upsilon_1^B, \ldots, \upsilon_{n-1}^B\} \) be their normalized eigenvectors. A distance metric is defined as:

$$ S_d(A,B) = \sum_{i=0}^{r-1} \left\Vert \upsilon_i^A - \upsilon_i^B \right\Vert $$

Here ‖·‖ denotes the Euclidean distance between two vectors. The parameter r is the number of leading eigenvectors used. In general, Sd provides a metric to gauge the similarity between two contact maps. The distance is then linearly rescaled to a reproducibility score ranging from 0 to 1.

QuASAR-Rep

The Quality Assessment of Spatial Arrangement Reproducibility (QuASAR) measure uses the concept that, within a distance matrix, as the distance between two features approaches zero, the correlation between the rows corresponding to those two features approaches one. This relationship is exploited by calculating the interaction correlation matrix, weighted by interaction enrichment. To determine reproducibility across replicates, the correlation of weighted correlation matrices is calculated as follows. In every case, matrices are first filtered by removing intrachromosomal interaction matrix rows and columns such that all remaining rows and columns contain at least one non-zero entry within 100 bins up- or downstream of the diagonal. The background signal-distance relationship is estimated as the mean number of reads for each inter-bin distance. The interaction correlation matrix is calculated across all pairwise sets of rows and columns within 100 bins of each other from the log-transformed enrichment matrix (non-filtered counts divided by background signal-distance values), excluding bins falling on the diagonal in either set. For a given pair of rows A and B, the correlation is calculated from all columns within 100 bins of both A and B, excluding filtered columns. The interaction matrix is then found by adding 1 to valid entries and taking the square root. The weighted correlation matrix is the element-wise product of the correlation matrix and the interaction matrix, divided by the sum of all valid interaction matrix entries. The replication score is the correlation of the weighted correlation matrices of two samples. Note that, to distinguish the use of QuASAR for assessing reproducibility versus data quality (described below), we refer in the main text to "QuASAR-Rep" and "QuASAR-QC."

Processing of reproducibility scores

All the reproducibility measures we use in this study assign a reproducibility score to a pair of Hi-C contact matrices. Due to the sparsity and noisy nature of interchromosomal matrices, reproducibility scores are only calculated for intrachromosomal matrices. The final reproducibility score assigned to a pair of Hi-C experiments in this study is the mean of the reproducibility scores assigned to the pairs of Hi-C contact matrices of each chromosome.

Empirical reproducibility score thresholds

To infer empirical thresholds that distinguish non-replicate pairs from biological replicate pairs for each method, we used the distribution of reproducibility scores assigned to non-replicate pairs and biological replicate pairs at a given coverage level. Similar to the concept of a maximal margin hyperplane, the empirical threshold we inferred is the midpoint between the reproducibility score of the highest scoring non-replicate pair and the reproducibility score of the lowest scoring biological replicate pair. For each coverage level from 30 million Hi-C interactions down to 5 million interactions, we inferred a single empirical threshold for each reproducibility metric. These thresholds are available in Additional file 1: Table S2.
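The empirical threshold rule is simple enough to state directly in code. The sketch below assumes that the per-pair scores for one metric at one coverage level have already been collected into two lists; the variable names are ours.

```python
def empirical_threshold(non_replicate_scores, replicate_scores):
    """Midpoint between the best-scoring non-replicate pair and the
    worst-scoring biological replicate pair, as described above."""
    return 0.5 * (max(non_replicate_scores) + min(replicate_scores))

# Hypothetical usage: one threshold per metric per coverage level.
# thresholds[("HiCRep", 30_000_000)] = empirical_threshold(non_rep_scores, bio_rep_scores)
```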
Measures of quality

QuASAR-QC

The sample quality measure from QuASAR ("QuASAR-QC") uses the same transformation described above for reproducibility. However, instead of looking at weighted correlation matrices between samples, the quality score is found by taking the weighted correlation mean across all chromosomes and then subtracting the unweighted correlation mean across all chromosomes.

TAD boundary calling and analysis

TAD boundaries were identified using the insulation score [40]. This score captures the density of signal in the Hi-C contact matrix around the diagonal, as a function of genomic position. Because the signal is weaker at the boundary between two TADs, minima in the insulation score profile correspond to TAD boundaries. We used the TAD calling software described in Giorgetti et al. [5], employing the previously used parameters (--ss 80000 --im iqrMean --is 480000 --ids 320000) for calculation of the insulation score and identification of minima.

To characterize the effects of noise and coverage on TAD boundary identification, we used the noise-injected and downsampled datasets described above and applied the insulation score method as described in the previous section. For the noise-injected datasets, we found that the number of identified TADs across the genome is only altered at the highest noise levels: the number of total TADs increased by only 5% with 50% noise injection (Additional file 1: Figure S11). Consistent with the changes in the total number of TADs, the distribution of TAD sizes is only altered at high noise levels. For 7 out of 11 cell types, we detect a statistically significant reduction in the TAD size distribution (P < 0.01, Kolmogorov-Smirnov test) only at either 40% or 50% noise (Additional file 1: Figure S12). Furthermore, the positions of TAD boundaries are not altered with increasing noise levels. For the 11 cell types, we calculated the distances between the TAD boundaries of the combined noise-free biological replicate and the TAD boundaries from noise-injected replicates. These distances were compared against the TAD boundary distances from biological replicate pairs, which serve as a baseline for how much TAD boundaries fluctuate between different replicates from the same cell type. Again, we found that the boundary distances are significantly larger than the baseline distribution (one-sided KS test, P < 0.05) only at the 50% noise level for four cell types and never larger for the remaining four cell types (Fig. 4c, Additional file 1: Figure S13).

We adopted the same approach to investigate the effect of coverage on insulation score-identified TADs. For each of the six downsampled cell lines, we identified TADs using the insulation score method and compared the total number of TADs, the size distribution of TADs, and the differences between TAD boundaries between the original replicate and downsampled replicates. We observe that the total number of TADs detected and the TAD size distributions are similar at all coverage levels (Additional file 1: Figures S16 and S17). We calculated the distances between TAD boundaries identified from downsampled replicates and the TAD boundaries from the original biological replicates, and we compared this distribution against the distances between biological replicates as a baseline. For five of the six cell types, downsampling causes the TAD boundaries to shift away from the original boundaries significantly (Kolmogorov-Smirnov test, P < 0.05) only at 10 million or fewer interactions, further supporting the idea that TAD boundary detection by insulation score is largely robust to low coverage (Fig. 4f, Additional file 1: Figure S18).
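For readers who want the gist of the insulation approach without the full pipeline, here is a minimal sketch. It is not the software of Giorgetti et al. and ignores the iqrMean normalization and boundary-strength filtering controlled by the parameters quoted above; the window default assumes 40-kb bins, so that 12 bins approximates the 480-kb insulation square.

```python
import numpy as np

def insulation_profile(mat, window=12):
    """Insulation-style score per bin: mean signal in the window x window square that
    spans the diagonal (upstream rows vs. downstream columns), log2-normalized to the
    chromosome-wide mean. Minima in this profile suggest TAD boundaries."""
    n = mat.shape[0]
    raw = np.full(n, np.nan)
    for i in range(window, n - window):
        raw[i] = mat[i - window:i, i + 1:i + window + 1].mean()
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.log2(raw / np.nanmean(raw))

def candidate_boundaries(score):
    """Naive local-minima detection; the production pipeline applies additional filters."""
    return [i for i in range(1, len(score) - 1)
            if np.isfinite(score[i]) and score[i] < score[i - 1] and score[i] < score[i + 1]]
```

Boundary shifts between two matrices can then be summarized as the distance from each boundary in one set to the nearest boundary in the other, which is the quantity compared with the KS tests described above.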
Number of significant contacts

For a given normalized Hi-C contact map, we computed the number of contacts that are deemed statistically significant using Fit-Hi-C [38]. Hi-C contact maps were binned at 40-kb resolution and normalized using the Knight-Ruiz matrix balancing algorithm [48]. Deeply sequenced Hi-C data from two cell types were binned at 10-kb and 40-kb resolutions for the Fit-Hi-C analysis (Additional file 1: Table S2). Fit-Hi-C assigns a statistical significance to each contact between two bins by assigning a P value and a q value. For each experiment, we counted the number of intrachromosomal contacts that pass a given q value threshold, aggregated these counts over all chromosomes, and used this sum as the total number of significant contacts for that experiment.

Mapping statistics

We used four statistics to summarize alignment quality, valid Hi-C fragment pairs, the balance of intrachromosomal and interchromosomal Hi-C interactions, and PCR duplication. A thorough description of these statistics and their application is reviewed in Lajoie et al. [24]. The first statistic is the percentage of aligned pairs, which corresponds to the percentage of Hi-C fragment pairs that uniquely map to the genome on both sides. Typically, single-sided and non-unique alignments are discarded in Hi-C pipelines [23, 24]. The second statistic is the percentage of invalid pairs, that is, aligned pairs that map to the same restriction fragment. These fragment pairs are non-informative since they do not correspond to a contact between two different regions [24]. The third statistic is the percentage of intrachromosomal valid pairs. Random ligations are much more likely to produce interchromosomal fragment pairs; thus, a high rate of non-informative random ligation events results in an enrichment of interchromosomal interactions and a depletion of intrachromosomal interactions [24]. The fourth statistic is the percentage of PCR duplicates, which is estimated from the number of aligned pairs that map to the exact same coordinates as another aligned pair [24].
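As a bookkeeping aid, the four percentages can be computed from per-replicate read-pair counts as sketched below. The field names are hypothetical (they depend on the alignment pipeline), and the denominators follow our reading of Lajoie et al. rather than any single tool's convention.

```python
def mapping_statistics(counts):
    """counts: dict with hypothetical fields 'total_pairs', 'aligned_pairs',
    'same_fragment_pairs', 'intra_pairs', and 'duplicate_pairs' for one replicate."""
    aligned = counts["aligned_pairs"]
    return {
        "pct_aligned": 100.0 * aligned / counts["total_pairs"],
        "pct_invalid": 100.0 * counts["same_fragment_pairs"] / aligned,
        "pct_intrachromosomal": 100.0 * counts["intra_pairs"] / aligned,
        "pct_pcr_duplicates": 100.0 * counts["duplicate_pairs"] / aligned,
    }

# Correlating one statistic against QuASAR-QC across replicates, as in Fig. 5:
# from scipy.stats import pearsonr
# stats = [mapping_statistics(c) for c in per_replicate_counts]
# r, _ = pearsonr([s["pct_intrachromosomal"] for s in stats], quasar_qc_scores)
```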
Simulation of noisy Hi-C matrices

To generate noise for Hi-C data in a realistic manner, we simulated two Hi-C contact matrices that would result from two processes that are not dictated by the 3D organization of chromatin. These "pure noise" matrices are mixed with the real Hi-C contact matrix to generate the final, noisy Hi-C matrix. The first noise matrix models the genomic distance effect, namely the higher probability of observing a Hi-C interaction between two regions that are close along the one-dimensional length of a chromosome. Because such regions are constrained to be close to each other, they are more likely to interact than more distal regions, even in the absence of any higher order structure. This effect was documented early on and is generally corrected for in Hi-C contact matrices to better visualize medium- and long-range interactions [1]. The second noise matrix models the ligation of non-crosslinked DNA fragments during the ligation step of the Hi-C protocol. Fragment pairs that result from random ligation are uninformative since they can link two regions independently of 3D organization. Additionally, the Hi-C assay is subject to the same biases as other next-generation sequencing assays, including a bias in favor of GC-rich regions and a bias against regions of low mappability.

During the generation of both types of noise matrices, we accounted for such biases by using the sum of each row (the marginal) as a proxy for the overall bias of a bin. Coverage normalization of Hi-C matrices [1] similarly uses marginals to counter such biases.

To generate the genomic distance noise matrix G, we sampled from empirical distributions derived from the real Hi-C matrix. In this setting, the genomic distance D is defined as the number of bins that lie between a pair of bins i and k, i.e., |i − k| = D. For every value of D, we build a vector S by collecting the set of real Hi-C matrix entries Mik for which |i − k| = D. We then randomly select values from S for insertion into G, again considering only entries Gik for which |i − k| = D. This sampling strategy effectively shuffles the matrix entries of M at a fixed distance, thus preserving the original genomic distance effect while disrupting other higher order structures. However, instead of uniformly sampling from S, we adopted a stratified sampling strategy to better model GC and mappability biases. Specifically, S was broken into multiple strata before sampling. The strata are determined by products of marginals, i.e., Mik is assigned to a stratum based on the product of the marginals of bin i and bin k. For a given value of D, we chose the stratum size such that each stratum contains 100 elements. When sampling Gik, we sampled a value from the stratum to which Mik belongs. By repeating the stratified sampling for every value of D, the final matrix G is obtained.

To generate the random ligation noise matrix R, we generated random Hi-C interactions and aggregated them to build a Hi-C contact matrix. We generated these interactions by randomly choosing two bins i and k and adding one to the matrix entry Rik in the random noise contact matrix. Instead of sampling the bins uniformly, the probability of sampling a bin was set proportional to the marginal of that bin, thus modeling the GC and mappability bias of each bin. The sampling process was repeated N times, where N is the total number of interactions in the original Hi-C contact matrix M, to generate the random ligation noise matrix.

After both noise matrices are generated from the original Hi-C matrix, they are mixed in varying proportions to generate a series of noisy Hi-C matrices. Each such matrix is a mixture of three matrices: the real matrix, a genomic distance noise matrix, and a random ligation noise matrix. To generate a simulated matrix with c total counts, we sampled counts uniformly at random from the one real and two simulated matrices at a given target ratio. In practice, we varied the total proportion of noise from 0 to X%, and for each total noise level, we considered two settings for the relative proportions of genomic distance noise and random ligation noise: we either used one third of matrix G and two thirds of matrix R, or vice versa. We note that most analyses in this study were robust to either scenario. The software for injecting noise into Hi-C contact matrices is available at https://github.com/gurkanyardimci/hic-noise-simulator.

Downsampling

Downsampled datasets were generated by converting an input Hi-C matrix into a set of individual pairwise intrachromosomal interactions and uniformly sampling a given number of interactions from this set. Following downsampling, we re-binned the set of chosen interactions into a Hi-C matrix.
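A minimal sketch of this downsampling step is shown below. It assumes a dense, symmetric, integer-count intrachromosomal matrix and draws read pairs exactly, without replacement; the helper names are ours, not the study's code. The same routine can be reused for pseudo-replicates (sum two replicates, then downsample the combination), as described under "Generation of pseudo-replicates" below.

```python
import numpy as np

def downsample_contacts(mat, n_target, seed=0):
    """Downsample a symmetric integer-count contact matrix to n_target interactions:
    treat the upper triangle (including the diagonal) as a multiset of read pairs,
    draw n_target of them without replacement, and re-bin into a matrix."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(mat)
    counts = mat[iu].astype(np.int64)
    if n_target > counts.sum():
        raise ValueError("n_target exceeds the number of available interactions")
    kept = rng.multivariate_hypergeometric(counts, n_target)
    out = np.zeros_like(mat)
    out[iu] = kept
    return out + np.triu(out, 1).T  # restore symmetry off the diagonal

# Hypothetical pseudo-replicate construction from two biological replicates:
# combined = rep1 + rep2
# pseudo = downsample_contacts(combined, n_target=average_of_replicate_totals)
```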
For the analysis of reproducibility measures, we limited the analysis to real data from the six cell types with replicates of at least 30 million interactions, and we downsampled each individual replicate to a wide range of total interaction counts (30, 25, 20, 15, 10, 5, and 1 million interactions). Using a single pseudo-replicate pair and a single biological replicate pair for each cell type and 15 non-replicate pairs at each coverage level, we generated a total of 189 replicate pairs. These datasets were used to test the ability of each method to distinguish among different replicate types at lower coverage levels and to explicitly profile the dependence of reproducibility scores on coverage.

For the analysis of QC measures, we generated downsampled biological replicates from the same six cell types with fewer interactions (30, 25, 20, 15, 10, 5, and 1 million interactions), resulting in a set of 84 matrices. In addition, we applied the same setup to the deeply sequenced datasets from two cell types at a wider range of coverage values (30, 60, 120, 240, and 400 million interactions) and at multiple resolutions, resulting in 30 matrices. For each downsampled matrix, we calculated QuASAR scores and identified statistically significant long-range contacts and TAD boundaries.

Generation of pseudo-replicates

Given two biological replicate experiments, we generated pseudo-replicates by aggregating the two replicates and downsampling from the combined matrix. The two biological replicates are combined by summing their Hi-C contact matrices. The resulting combined Hi-C matrix is then downsampled as described above to generate pseudo-replicates. We set the total number of interactions of each pseudo-replicate to the average of the total numbers of interactions of the two seed biological replicates.

Resolution analysis

To investigate whether an optimal resolution exists for a given sample, we profiled the reproducibility scores assigned to biological replicate pairs from four cell types: A549, G401, LNCaP, and HepG2. Hi-C data from the first three cell types were generated with the HindIII restriction enzyme, whereas the HepG2 data were generated with DpnII. These samples also exhibit differing levels of coverage (Additional file 1: Table S1). In this analysis, we binned the contact matrix of each replicate at 40-kb, 20-kb, 10-kb, and 5-kb resolutions and calculated the reproducibility scores assigned to each biological replicate pair. For this analysis only, we limited the computation of reproducibility scores to the contact matrices of chr22.

References

Lieberman-Aiden E, van Berkum NL, Williams L, Imakaev M, Ragoczy T, Telling A, et al. Comprehensive mapping of long-range interactions reveals folding principles of the human genome. Science. 2009;326:289–93.

Dixon JR, Jung I, Selvaraj S, Shen Y, Antosiewicz-Bourget JE, Lee AY, et al. Chromatin architecture reorganization during stem cell differentiation. Nature. 2015;518:331–6.

Krijger PHL, Di Stefano B, De Wit E, Limone F, Van Oevelen C, De Laat W, et al. Cell-of-origin-specific 3D genome structure acquired during somatic cell reprogramming. Cell Stem Cell. 2016;18:597–610.

Ma W, Ay F, Lee C, Gulsoy G, Deng X, Cook S, et al. Fine-scale chromatin interaction maps reveal the cis-regulatory landscape of lincRNA genes in human cells. Nat Methods. 2015;12:71–8.

Giorgetti L, Lajoie BR, Carter AC, Attia M, Zhan Y, Xu J, et al. Structural organization of the inactive X chromosome in the mouse. Nature. 2016;535:575–9.
Darrow EM, Huntley MH, Dudchenko O, Stamenova EK, Durand NC, Sun Z, et al. Deletion of DXZ4 on the human inactive X chromosome alters higher-order genome architecture. Proc Natl Acad Sci U S A. 2016;113:E4504–12. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=PMC4978254.

Naumova N, Imakaev M, Fudenberg G, Zhan Y, Lajoie BR, Mirny LA, et al. Organization of the mitotic chromosome. Science. 2013;342:948–53.

Dixon JR, Selvaraj S, Yue F, Kim A, Li Y, Shen Y, et al. Topological domains in mammalian genomes identified by analysis of chromatin interactions. Nature. 2012;485:376–80.

Nora EP, Lajoie BR, Schulz EG, Giorgetti L, Okamoto I, Servant N, et al. Spatial partitioning of the regulatory landscape of the X-inactivation centre. Nature. 2012;485:381–5.

Sexton T, Yaffe E, Kenigsberg E, Bantignies F, Leblanc B, Hoichman M, et al. Three-dimensional folding and functional organization principles of the Drosophila genome. Cell. 2012;148:458–72.

Rao SSP, Huntley MH, Durand NC, Stamenova EK, Bochkov ID, et al. A 3D map of the human genome at kilobase resolution reveals principles of chromatin looping. Cell. 2014;159:1665–80.

Jin F, Li Y, Dixon JR, Selvaraj S, Ye Z, Lee AY, et al. A high-resolution map of the three-dimensional chromatin interactome in human cells. Nature. 2013;503:290–4.

Schmitt AD, Hu M, Ren B. Genome-wide mapping and analysis of chromosome architecture. Nat Rev Mol Cell Biol. 2016;17:743–55.

Barski A, Cuddapah S, Cui K, Roh TY, Schones DE, Wang Z, et al. High-resolution profiling of histone methylations in the human genome. Cell. 2007;129:823–37.

Boyle AP, Davis S, Shulha HP, Meltzer P, Margulies EH, Weng Z, et al. High-resolution mapping and characterization of open chromatin across the genome. Cell. 2008;132:311–22.

Landt SG, Marinov GK, Kundaje A, Kheradpour P, Pauli F, Batzoglou S, et al. ChIP-seq guidelines and practices of the ENCODE and modENCODE consortia. Genome Res. 2012;22:1813–31. https://doi.org/10.1101/gr.136184.111.

Li Q, Brown JB, Huang H, Bickel PJ. Measuring reproducibility of high-throughput experiments. Ann Appl Stat. 2011;5:1752–79.

Qin Q, Mei S, Wu Q, Sun H, Li L, Taing L, et al. ChiLin: a comprehensive ChIP-seq and DNase-seq quality control and analysis pipeline. BMC Bioinformatics. 2016;17:404.

Ji H, Jiang H, Ma W, Johnson DS, Myers RM, Wong WH. An integrated system CisGenome for analyzing ChIP-chip and ChIP-seq data. Nat Biotechnol. 2008;26:1293.

Frank CL, Liu F, Wijayatunge R, Song L, Biegler MT, Yang MG, et al. Regulation of chromatin accessibility and Zic binding at enhancers in the developing cerebellum. Nat Neurosci. 2015;18:647–56.

Bardet AF, He Q, Zeitlinger J, Stark A. A computational pipeline for comparative ChIP-seq analyses. Nat Protoc. 2012;7:45–61.

Ho JWK, Bishop E, Karchenko PV, Nègre N, White KP, Park PJ. ChIP-chip versus ChIP-seq: lessons for experimental design and data analysis. BMC Genomics. 2011;12:134.

Ay F, Noble WS. Analysis methods for studying the 3D architecture of the genome. Genome Biol. 2015;16:1–15.

Lajoie BR, Dekker J, Kaplan N. The Hitchhiker's guide to Hi-C analysis: practical guidelines. Methods. 2015;72:65–75.

Tjong H, Gong K, Chen L, Alber F. Physical tethering and volume exclusion determine higher-order genome organization in budding yeast. Genome Res. 2012;22:1295–305.

Hu M, Deng K, Selvaraj S, Qin Z, Ren B, Liu JS. HiCNorm: removing biases in Hi-C data via Poisson regression. Bioinformatics. 2012;28:3131–3.

Gorkin DU, Leung D, Ren B.
The 3D genome in transcriptional regulation and pluripotency. Cell Stem Cell. 2014;14(6):771–5.

van Berkum NL, Lieberman-Aiden E, Williams L, Imakaev M, Gnirke A, Mirny LA, et al. Hi-C: a method to study the three-dimensional architecture of genomes. J Vis Exp. 2010;6:1869. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3149993&tool=pmcentrez&rendertype=abstract.

Teng M, Love MI, Davis CA, Djebali S, Dobin A, Graveley BR, et al. A benchmark for RNA-seq quantification pipelines. Genome Biol. 2016;17:74.

Imakaev M, Fudenberg G, McCord RP, Naumova N, Goloborodko A, Lajoie BR, et al. Iterative correction of Hi-C data reveals hallmarks of chromosome organization. Nat Methods. 2012;9:999–1003.

Serra F, Baù D, Goodstadt M, Castillo D, Filion G, Marti-Renom MA. Automatic analysis and 3D-modelling of Hi-C data using TADbit reveals structural features of the fly chromatin colors. PLoS Comput Biol. 2017;13:175.

Nagano T, Várnai C, Schoenfelder S, Javierre BM, Wingett SW, Fraser P. Comparison of Hi-C results using in-solution versus in-nucleus ligation. Genome Biol. 2015;16.

Yan KK, Yardımcı GG, Yan C, Noble WS. HiC-spector: a matrix library for spectral and reproducibility analysis of Hi-C contact maps. Bioinformatics. 2017;33(14):2199–201.

Yang T, Zhang F, Yardımcı GG, Song F, Hardison RC, Noble WS, et al. HiCRep: assessing the reproducibility of Hi-C data using a stratum-adjusted correlation coefficient. Genome Res. 2017;gr.220640.117. http://genome.cshlp.org/lookup/doi/10.1101/gr.220640.117.

Ursu O, Boley N, Taranova M, Wang YXR, Yardımcı GG, Noble WS, et al. GenomeDISCO: a concordance score for chromosome conformation capture experiments using random walks on contact map graphs. bioRxiv. 2017:181842. Available from: https://www.biorxiv.org/content/early/2017/08/29/181842.

Sauria ME, Taylor J. QuASAR: Quality Assessment of Spatial Arrangement Reproducibility in Hi-C data. bioRxiv. 2017:204438. Available from: https://www.biorxiv.org/content/early/2017/11/14/204438.

Ramírez F, Lingg T, Toscano S, Lam KC, Georgiev P, Chung HR, et al. High-affinity sites form an interaction network to facilitate spreading of the MSL complex across the X chromosome in Drosophila. Mol Cell. 2015;60:146–62.

Ay F, Bailey TL, Noble WS. Statistical confidence estimation for Hi-C data reveals regulatory chromatin contacts. Genome Res. 2014;24:999–1011.

Carty M, Zamparo L, Sahin M, González A, Pelossof R, Elemento O, et al. An integrated model for detecting significant chromatin interactions from high-resolution Hi-C data. Nat Commun. 2017;8.

Crane E, Bian Q, McCord RP, Lajoie BR, Wheeler BS, Ralston EJ, et al. Condensin-driven remodelling of X chromosome topology during dosage compensation. Nature. 2015;523:240–4.

Nagano T, Lubling Y, Stevens TJ, Schoenfelder S, Yaffe E, Dean W, et al. Single-cell Hi-C reveals cell-to-cell variability in chromosome structure. Nature. 2013;502:59–64.

Nagano T, Lubling Y, Várnai C, Dudley C, Leung W, Baran Y, et al. Cell-cycle dynamics of chromosomal organization at single-cell resolution. Nature. 2017;547:61–7.

Ramani V, Deng X, Qiu R, Gunderson KL, Steemers FJ, Disteche CM, et al. Massively multiplex single-cell Hi-C. Nat Methods. 2017;14:263–6.

Hughes JR, Roberts N, McGowan S, Hay D, Giannoulatou E, Lynch M, et al. Analysis of hundreds of cis-regulatory landscapes at high resolution in a single, high-throughput experiment. Nat Genet. 2014;46:205–12.

Fullwood MJ, Ruan Y.
ChIP-based methods for the identification of long-range chromatin interactions. J Cell Biochem. 2009;107:30–9.

Phanstiel DH, Boyle AP, Heidari N, Snyder MP. Mango: a bias-correcting ChIA-PET analysis pipeline. Bioinformatics. 2015;31:3092–8.

Cairns J, Freire-Pritchett P, Wingett SW, Várnai C, Dimond A, Plagnol V, et al. CHiCAGO: robust detection of DNA looping interactions in Capture Hi-C data. Genome Biol. 2016;17:127.

Knight P, Ruiz D. A fast algorithm for matrix balancing. IMA J Numer Anal. 2013;33:1029–47.

We thank Giancarlo Bonora and Kate Cook for helpful discussions. G.G.Y. and W.S.N. are supported by NIH awards U41HG007000 and U24HG009446. H.O. is supported by DK107980. B.R.L. is supported by awards HG004592 and HG003143, and J.D. is supported by awards HG004592, HG003143, and DK107980. M.E.G.S. and J.T. are supported by NIH awards R24 DK106766 and U41 HG006620. O.U. is supported by a Howard Hughes Medical Institute International Student Research Fellowship and a Gabilan Stanford Graduate Fellowship award, and A.K. is supported by NIH awards DP2OD022870, U24HG009397, and R01ES025009-02S1. T.Y. is supported by NIH T32 GM102057 (CBIOS training program to The Pennsylvania State University) and a Huck Graduate Research Innovation Grant, and Q.L. is supported by NIH award R01GM109453.

The Hi-C data used for generating the simulated Hi-C matrices and for evaluating the quality control and reproducibility methods are publicly available at https://www.encodeproject.org/. The accession code of each dataset is available in Additional file 1: Table S2. We also list the ENCODE cell line names and the corresponding ENCODE sample IDs in the following table.

Biosample: ENCODE sample ID
A549: ENCSR444WCZ
CAKI2: ENCSR401TBQ
G401: ENCSR079VIJ
LNCaP: ENCSR346DCU
NCIH460: ENCSR489OCU
PANC1: ENCSR440CTR
RPMI7951: ENCSR862OG
SKMEL5: ENCSR312KHQ
SKNDZ: ENCSR105KFX
SKNMC: ENCSR834DXR
T47D: ENCSR549MGQ
HepG2: ENCSR194SRI
HeLa: ENCSR693GXU

The code for running the reproducibility and quality control methods is publicly available at https://github.com/kundajelab/3DChromatin_ReplicateQC. 3DChromatin_ReplicateQC is licensed under the MIT license. The version of 3DChromatin_ReplicateQC that was used to generate the results presented in this study is available at https://doi.org/10.5281/zenodo.1208922. Software for simulating noise-injected matrices can be accessed at https://github.com/gurkanyardimci/hic-noise-simulator.

Author affiliations:
Department of Genome Sciences, University of Washington, Seattle, USA: Galip Gürkan Yardımcı & William S. Noble
Program in Systems Biology, University of Massachusetts Medical School, Worcester, USA: Hakan Ozadam, Bryan R. Lajoie & Job Dekker
Biology Department, Johns Hopkins University, Baltimore, USA: Michael E. G. Sauria & James Taylor
Department of Genetics, Stanford University, Stanford, USA: Oana Ursu & Anshul Kundaje
Department of Computational Biology, St. Jude Children's Research Hospital, Memphis, USA
Bioinformatics and Genomics Program, Huck Institutes of the Life Sciences, Penn State University, State College, USA: Fan Song & Feng Yue
Computational Biology Division, La Jolla Institute for Allergy and Immunology, San Diego, USA: Arya Kaul & Ferhat Ay
University of Massachusetts Medical School, Worcester, USA: Ye Zhan
Program in Computational Biology and Bioinformatics, Yale University, New Haven, USA
Department of Computer Science, Stanford University, Stanford, USA
Department of Statistics, Penn State University, State College, USA: Qunhua Li
Department of Biochemistry & Molecular Biology, College of Medicine, Penn State University, State College, USA: Feng Yue
Howard Hughes Medical Institute, Chevy Chase, USA: Job Dekker
Computer Science Department, Johns Hopkins University, Baltimore, USA

Author contributions: GGY, JD, and WSN designed the experiments. YZ and BRL generated and processed the data. GGY, HO, MEGS, OU, KKY, TY, AC, AK, and FA ran the experiments. GGY and WSN analyzed the results. GGY, OU, and WSN made the figures. OU wrote the 3DChromatin_ReplicateQC software, with input from MEGS, KKY, and TY. All authors contributed to the preparation of the manuscript. All authors read and approved the final manuscript.

Correspondence to Job Dekker or William S. Noble.

Additional file 1: Supplementary figures and tables describing additional results and the datasets used in this study, respectively. (DOCX 5 kb)

Yardımcı, G.G., Ozadam, H., Sauria, M.E.G. et al. Measuring the reproducibility and quality of Hi-C data. Genome Biol 20, 57 (2019). doi:10.1186/s13059-019-1658-7
"Soft" policing at hot spots—do police community support officers work? A randomized controlled trial Barak Ariel1,2, Cristobal Weinborn1 & Lawrence W Sherman1,3 Journal of Experimental Criminology volume 12, pages 277–317 (2016)Cite this article To determine whether crime-reduction effects of increased police patrols in hot spots are dependent on the "hard" threat of immediate physical arrest, or whether "soft" patrols by civilian (but uniformed) police staff with few arrest powers and no weapons can also reduce crime. We also sought to assess whether the number of discrete patrol visits to a hot spot was more or less important than the total minutes of police presence across all visits, and whether effects based on counts of crime would be consistent with effects on a Crime Harm Index outcome. We randomly assigned 72 hot spots into 34 treatment units and 38 controls. Treatment consisted of increases in foot patrol by uniformed, unarmed, Police Community Support Officers (PCSOs) who carry no weapons and hold few arrest powers beyond those of ordinary citizens. GPS-trackers on every PCSO and Constable in the city yielded precise measurements of all patrol time in all hot spots. Standardized mean differences (Cohen's d), OLS regression model, and Weighted Displacement Quotient are used to assess main effects, to model the interaction effect of GPS data with treatment, and to measure the diffusion-of-benefits of the intervention, respectively. Outcomes included counts of incidents as well as the Cambridge Crime Harm Index. As intended, patrol visits and minutes by Police Constables were equal across the treatment and control groups. The sole difference in policing between the treatments groups was in visits to the hot spots by PCSOs, in both the mean daily frequency of discrete visits (T = 4.65, C = 2.66; p ≤ .001) and total minutes across all visits (T = 37.41, C = 15.92; p ≤ .001), approximately two more ten-minute visits per day in treatment than in control. Main effect estimates suggest 39 % less crime by difference-in-difference analysis of reported crimes compared to control conditions, and 20 % reductions in emergency calls-for-service compared to controls. Crime in surrounding areas showed a diffusion of benefits rather than displacement for treatment hot spots compared to controls. A "Reiss's Reward" effect was observed, with more proactive patrols predicting less crime across treatment hot spots, while more reactive PCSO time predicted more crime across control hot spots. Crime Harm Index estimates of the seriousness value of crime prevented ranged from 85 to 360 potential days of imprisonment in each treatment group hot spot (relative to controls) by a mean difference of 21 more minutes of PCSO patrol per day, for a potential return on investment of up to 26 to 1. A crime reduction effect of extra patrols in hot spots is not conditional on "hard" police power. Even small differences in foot patrols showing the "soft power" of unarmed paraprofessionals, holding constant vehicular patrols by Police Constables, were causally linked to both lower counts of crimes and a substantially lower crime harm index score. Correlational evidence within the treatment group suggests that greater frequency of discrete PCSO visits may yield more crime reduction benefit than greater duration of those visits, but RCTs are needed for better evidence on this crucial issue. Working on a manuscript? Avoid the common mistakes Hot spots of crime and disorder have received much attention in recent years. 
An abundance of rigorous evidence converges on two lines of research. First, predictive and diagnostic approaches show that crime is disproportionately concentrated into a "power few" "micro" areas of land afflicted by a disproportionate number of antisocial events (Pierce et al. 1988; Sherman 1987; Sherman 1995; Sherman et al. 1989; Weisburd 2015; Weisburd et al. 2004). These small pieces of land—street segments, intersections, city blocks or unique addresses—account for much of the crime in any city described in published research to date. This phenomenon has led to the discovery of what is called a "law of concentration of crime in place" (Weisburd 2015; Weisburd & Amram 2014; Weisburd et al. 2010, p. 16), or what might be termed the criminal careers of places (Sherman 1993, 2007; see also Sherman et al. 1989). The second area of work tests the preventive effect of police presence at these hot spots. Repeated randomized trials show that targeted increases in police patrol deployment reduce recorded crime in the targeted areas compared to control areas, whether patrolling in marked police cars (Sherman and Weisburd 1995; Telep et al. 2012) or on foot (Ratcliffe et al. 2011). Systematic reviews of the evidence on hot spots policing experiments suggest that the benefits associated with it exceed the costs (Braga et al. 1999, 2012), without much evidence of spatial displacement to adjacent areas in the vicinity of the targeted hot spots (Bowers et al. 2011; Weisburd et al. 2006).

The success of hot spots policing in reducing crime is generally attributed to deterrence theory (Nagin 2013a; Sherman and Weisburd 1995; Sherman et al. 2014). The efficacy of deterrence has been argued to be borne out of the perceived likelihood of apprehension (Nagin 2013a; Weisburd et al. 2013b): uniformed power-holders who can make arrests for criminal transgressions that they detect cause rational actors to become substantially less likely to commit crime. Police powers of arrest and legal authority to apply necessary force create a threat to anyone contemplating a crime. This theoretical perspective has gone generally uncontested. Police officers and scholars alike assume that the threat of immediate incarceration, or at least interdiction, deters offenders. As Durlauf and Nagin write, "for criminal decisions, what matters is the subjective probability a potential criminal assigns to apprehension" (Durlauf and Nagin 2011; see also Groff et al. 2014). Yet there has been little attention to operationalizing subjective threat in experimental work. Nagin (2013a, b), Loughran et al. (2012b), and more recently Nagin et al. (2015) have modeled the necessary conditions in which deterrence exerts an effect on decision-making. Yet how much of a threat is needed, with what immediacy, and through what kind of social control it must be delivered has generally gone untested.

These theoretical questions have practical meaning. Think, for instance, about the common view that crime is most effectively deterred by police officers who carry firearms, as distinct from unarmed security guards—or even from police who routinely patrol without firearms, as in the United Kingdom. This maxim of deterrence through superior force implies that police officers must apply a direct threat of total intervention, including immediate death, in order to create a localized general deterrent effect. Yet so far, there is no experimental evidence to address that claim. In this paper, we try to address that shortcoming.
We test a form of "soft" policing (see Burke 2004) on crime and disorder in an English city, with a paraprofessional police role first proposed in the US (President's Commission 1967) and later adopted by police forces in England and Wales, which employ some 15,000 Police Community Support Officers (PCSOs) to complement some 110,000 Police Constables. Police community service officers The option of "soft policing" was enshrined in law with the introduction of "police community support officers" (PCSOs) through the Police Reform Act for England and Wales of 2002. From that date, PCSOs have been entitled to wear uniforms that look very much like that of police constables, but with far less powers than constables may exercise (Johnston 2006). PCSOs are not police officers; "they are civilian members of police staff." They work alongside their "warranted" police officer colleagues (who carry a "warrant card" that empowers to make arrests) "to provide a highly visible, accessible and familiar presence" (www.met.police.uk/pcso, downloaded 5th February 2016). Warranted police officers (also called Constables) have powers of arrest and are trained in first aid. PCSOs are non-warranted, and are barred from investigating crimes. They cannot carry firearms or any other weapons. They do, however, have specific powers to deal with specific minor offenses, such as ordering public beggars to desist and confiscating tobacco from persons under 16. They can even (in a quasi-judicial role) issue on-the-street fixed penalty notices requiring people to pay a fine for an offense the PCSO has witnessed (for example, cycling on the pavement, dog fouling, littering, and graffiti). Thus, their neighborhood policing role often amounts to increasing reassurance and visibility (see Innes 2005). They are also involved in administrative roles under direction by warranted officers, in such operations as seizing illegal narcotics or collecting CCTV evidence. While our use of the term "soft" to describe the power of PCSOs does not conform exactly to the meaning suggested by Burke (2004), our use conveys the actual extent of power—if not a style of persuasion—that PCSOs have relative to Constables. The powers of Constables substantially exceed those of PCSOs, with full arrest powers and weaponry (including nightsticks and tasers). PCSOs, in contrast, lack the most classic police powers beyond their policing insignia. They do not carry any type of weapon or handcuffs, not even a nightstick ("billyclub"). They cannot make arrests, conduct stop-and-frisks, or (in most kinds of cases) detain suspects or use any form of physical force against suspects of crime. Their tasks focus on providing reassurance, visibility and to serve as a link between the police and the community. According to the Cambridgeshire Constabulary website, PCSOs "carry radios to enable them to call for assistance, should it be required…wear protective vests, but [do not] carry other personal protection equipment such as CS spray or batons…PCSOs [in Peterborough] have blue bands around their hats, blue ties, and blue epaulettes on their shoulders. On the back of their coat or jacket, it says POLICE COMMUNITY SUPPORT OFFICER. All PCSOs carry personal identification," although notably these are not police warrant cards (https://www.cambs.police.uk/recruitment/pcso/faq.asp). 
Yet, given budgetary constraints in British policing, PCSOs are, for the most part, the only officers who conduct proactive and visible foot patrol—including in hot spots of crime and disorder. And at a distance, as Photo 1 shows, the appearance of a PCSO is extremely similar to that of a Police Constable.
Photo 1 Police Community Support Officer (left) and Police Constable (right)
Thus, this new "softer" police role provides an opportunity to test the lower limits of deterrence through reduced intrusiveness of legal powers and threats of summary use of force. It allows us to ask, experimentally, whether effective guardianship against crime and disorder in hot spots can be achieved with almost nothing but signals of social control, without hard power to implement control immediately.
Our study looked at the deterrent effect of these community-support officers at the most chronic, persistent and "hottest" hot spots of crime and disorder in Peterborough, a medium-sized English city in Cambridgeshire. Our randomized controlled trial assigned extra PCSO patrols to about half of the population of the 72 hottest hot spots, while the other half served as control hot spots. The extra PCSO patrol tasks comprised community engagement and visible foot patrol for 15 minutes, 3 times per day, during the hottest hours in terms of crime frequency, over a period of 12 months. Our outcome measures included the numbers of (1) crime reports, (2) calls for service, (3) assaults against officers, and (4) harm measures. We paid particular attention to the possibility of spatial displacement of crime to areas around these hot spots.
To refine our test of the "soft power" hypothesis, we created an unusually precise measure of the "dosage" (or minutes) of police patrol presence, by category of police officer (PCSO vs. Police Constable). While the original hot spots patrol experiment (Sherman and Weisburd 1995) used 18 graduate students with stopwatches and clipboards to measure how much police presence each hot spot received, most hot spots experiments since then (listed in Braga et al. 2012) had no ongoing tracking of police patrol time. Our quest for precision was especially necessary so that we could distinguish a difference in PCSO time from any difference (or similarity) in Police Constable time. We were able to take advantage of recent advancements in GPS technologies, using them in ways that British police (or police elsewhere) had never used them before. With a GPS transponder in every police officer's body-worn radio, we were able to track the dosage of police presence by type of police officer in each of the 72 hot spots. We measured the precise number of minutes that every officer's radio was located within each of the hot spots, in both treatment and control conditions, as well as the numbers of discrete visits (defined as arrivals and departures). We were also able to use these measures to distinguish the "soft" policing time of the PCSOs from the "hard" policing time of the Police Constables.
This paper begins with a review of the available evidence on hot spots policing, particularly findings from field experiments. Despite the impressive growth of this line of research since the original hot spots policing experiment (Sherman and Weisburd 1995), there are nevertheless major pieces missing from the puzzle—for instance, the police engagement necessary to exert a deterrent threat, which is the focus of our experiment.
There is a substantial body of research on policing hot spots more broadly, which we touch on later, yet we found very few direct investigations of what officers do to police hot spots. We conclude our literature review by looking at previous studies that called for a more granular analysis of deterrence theory and its application to the criminology of places, with specific attention to the work of Nagin (2013a, b). We then move on to describe our experiment with the Cambridgeshire Constabulary. We describe our design, our measures, the random assignment procedure and partial blinding, the interventions through the PCSOs, and the statistical procedures we used to analyze the results. We then discuss the findings and their implications for both theory and police practices.
Effects of more policing at hot spots of crime and disorder
"Hot spot policing"—once crudely described as a tactic of placing "cops on dots"—has been subjected to dozens of rigorous tests. A recent Campbell Collaboration systematic review showed that most tests of hot spot policing were associated with significant reductions in crime in treatment hot spots compared to control conditions (Braga and Clarke 2014; Braga et al. 2012). The list of hot spots experiments is continuously growing (e.g., Ratcliffe et al. 2011; Rosenfeld et al. 2014; Telep et al. 2012). Taken together, it reflects a "strong body of evidence [which] suggests that taking a focused geographic approach to crime problems can increase the effectiveness of policing" (Skogan and Frydl 2004, p. 247). There is also evidence to suggest that successful hot spot policing of crime does not displace crimes to adjacent areas in the vicinity of the targeted hot spots (Bowers et al. 2011; Weisburd et al. 2006). Instead, there seems to be a diffusion of benefits of these social control mechanisms to surrounding areas (Clarke and Weisburd 1994), or "radiation" of the treatment effect (Ariel 2014), not only "around the corner" from the targeted hot spots (Weisburd et al. 2006) but also to larger geographic areas (Telep et al. 2014).
Police engagement tactics in hot spots of crime and disorder
Thus, the evidence on hot spot policing is clear: when police officers target hot spots, they are able to reduce crime and disorder compared to control conditions. Directing the police to micro-places, where officers apply social control, consistently prevents crime. Despite this robust conclusion, it is still an open question as to what is the best tactical approach to policing hot spots. Put differently, what dimensions of police engagement work best, and under what conditions, in preventing crime at hot spots? There is an increasing focus in the literature on determining "the optimal strategies" of hot spot policing (Koper 2014). Some recent studies continue to reaffirm Sherman and Weisburd's (1995) original finding based on pure saturation, with no effort to structure what police actually do in hot spots (Telep et al. 2012). Yet others have begun to look more closely at precisely which specific aspects of police presence may more effectively prevent crime than others. For example, some have looked at problem-oriented policing (e.g., Braga and Bond 2008; Braga et al. 1999; Taylor et al. 2010; Weisburd and Green 1995), drug enforcement operations (e.g., Weisburd and Green 1994, 1995), increased gun searches and seizures (e.g., Sherman and Rogan 1995a, b), foot patrols (e.g., Ratcliffe et al. 2011), crackdowns (Sherman and Rogan 1995a), "zero-tolerance" policing or "broken windows" tactics (Caeti 1999; Weisburd et al. 2011), and intensified engagement (Rosenfeld et al. 2014). Yet few of these studies provide detailed measures of exactly what police were doing in the experimental hot spots; even one that did (Sherman and Rogan 1995a, b) failed to measure what police did in the control hot spots.
The deterrent effect of policing
Despite the undocumented treatment content and its variations in previous hot spots experiments, there are nevertheless common attributes to all hot spot policing approaches. First, it seems that the police must be focused on these micro-places of crime and disorder. In all studies of police initiatives that target hot spots with high spatial concentrations of events, officers have consciously focused both resources and efforts on these places. Once officers are tasked with applying any sort of intervention, crime generally goes down compared to hot spots not exposed to these focused treatments. We do not have a strong indication of which approach works "better" or in a "more" cost-effective way compared to other approaches (cf. Taylor et al. 2010), but the overall direction of virtually all hot spots studies suggests reductions in crime following focused engagement by police in the hot spots (Braga et al. 2012).
The second, and crucial, common theme is a clearly stated mission for police to serve as "sentinels" deterring crime in the hot spots (Nagin 2013a, p. 10). As opposed to police in hot spots acting as apprehension agents (incidentally, apprehension risk is probably not materially increased by improved investigations; Nagin 2013b, p. 89; see also Braga et al. 2011), hot spots patrol officers are trained to serve as "crime preventers" when they are visible to the public. This view of police officers as predominantly guardians was anticipated in Cohen and Felson's (1979) routine activities theory: police in their role as sentinels act as guardians who reduce opportunities for committing a crime. A drug store with a police officer standing outside it, for example, is not as attractive a criminal target as one with no police officer nearby. Opportunity theories of crime, such as routine activity (Cohen and Felson 1979; Cornish and Clarke 1986), rational choice (Clarke and Felson 1993), and crime pattern theory (Brantingham and Brantingham 1993a, b), have often been used to understand the place characteristics, situations, and dynamics that cause criminal events to concentrate at particular places. As these authors suggest, the increased presence of police augments the level of guardianship in targeted places. Heightened levels of patrol prevent crimes by introducing the watchful eye of the police as a guardian to protect potential victims from potential offenders. Even when officers are tasked to problem-solve, or engage the public through neighborhood policing, officers are nevertheless uniform-wearing, (often gun-carrying) power-holders who communicate the authority of the state by their presence. This quality, which is embodied in police insignia, contains a literal threat of apprehension which sends an unequivocal message. No matter the tactic applied, the presence of officers intensifies the cognitive perception of plausible apprehension for any transgression of the law.
Even "softer" police approaches like community policing still contain an ingredient of deterrence, at the very least for the duration of officers' physical presence within the hot spots. To be sure, this presumption of effective threat is not just theoretical; based on interviews with 589 arrestees in New York City following the police's quality of life initiatives, "the most important factor" behind behavioral changes—that is, reductions in the likelihood of committing crime and disorder—was police presence (Golub et al. 2003, p. 690). Wright and Decker (1994) reported similar results: offenders appear to be aware of police presence when they select their targets; they avoid neighborhoods with increased police presence when making a decision to commit robbery. With this in mind, we were particularly drawn to the question: how much of a deterrence threat is needed in order to materially motivate offenders away from committing crime? Answering this question is critical, not just for administrative criminologists who investigate law enforcement, but also for theorists who look at choices, risk perceptions and opportunity theories. Risk perceptions are perceptions of the likelihood and controllability of the event, as well as perceptions of the impact of the event if it were to occur (see review in Jackson and Kuha 2015, p. 10). Constructing these risk probabilities requires some level of rational thinking—even though the decision to commit crime is often described as irrational and suboptimal (Matsueda 2013; see also Cornish and Clarke 1986). Still, these perceptions are particularly pertinent to deterrence theory, which finds many examples of deterrent effects, on average, across large groups. There are clear individual differences in deterrability, perhaps based on differences in perceptions of the risks of getting caught. For example, experienced offenders seem to place relatively more weight on their prior subjective probabilities, unlike inexperienced offenders (Nagin 2013b, p. 94). The basic need to believe that things are stable, certain and predictable also varies between individuals (Kruglanski and Webster 1996). Within this framework, the decision to commit crime is heavily affected by one prominent factor: the individual's perceived risk of apprehension (Loughran et al. 2011; Nagin 1998, 2013a). Perception of this risk was found to be highly influenced by proximate variables, including objective sanction risks (Apel 2013), and among these objective risks, police presence is an important ecological cue that inhibits criminal conduct (Golub et al. 2003; Sherman 1990). But we remain entirely unclear about how much the presence of police doing what, has how much effect, for how long. Quantifying the certainty of apprehension At a high level of abstraction, there is ample evidence that the perceived certainty of punishment is causally associated with less crime (Bushway and Reuter 2008; Cullen et al. 2008; Lochner 2003; Loughran et al. 2012a; McCarthy 2002; Paternoster 2010; but cf. Berk and MacDonald 2010; Tonry 2008). More than severity of sanctions and likely to be more than celerity of sanctions (Nagin 2013a; Von Hirsch et al. 1999), increasing the likelihood of being caught is inversely linked to the likelihood of committing an offense. This "certainty effect" carries wide probabilities, over a range of settings in which the criminal justice system attempts deterrence. That conclusion allows at least two possibilities. 
First, while a minimum threshold of a punishment certainty effect is required to deter crime, at some point any incremental addition of certainty will no longer enhance the effect. Certainty may come and go by an "on–off switch," rather than being a continuous linear increase or decrease in decibel level. Second, that on–off switch may be set at higher thresholds of certainty for some people than for others.
On reflection, these thoughts may be unsurprising. As Loughran et al. (2012b, p. 714) explained, "certainty effects are predictably non-linear, [and] the prevailing detection probability before change occurs becomes a key moderator of certainty effects and a key consideration in policy formation." Loughran et al. (2012b) came the closest to empirically scrutinizing this effect, using subject-level data (as opposed to aggregated data); they show evidence of a "tipping effect", whereby perceived risk deters only when it reaches a certain threshold, and a substantially accelerated deterrent effect occurs for individuals at the high end of the risk continuum. This is unsurprising, because the phenomenon is well acknowledged in psychology and in economics in the framework of "diminishing marginal sensitivity" (Stevens 1957; see also Chamlin 1991; Logan 1972; Tittle and Rowe 1974). It seems that "beyond [our emphasis] some point, any further increase in the perceived certainty of punishment is associated with only very small decreases in the mean number of crime" (Erickson et al. 1977, p. 311). Yet diminishing marginal sensitivity does not address the minimal degree of certainty at which the certainty effect takes place. It addresses the tipping point in punishment risk and a point along the punishment risk continuum beyond which punishments no longer matter. We still do not know how much policing is needed to affect average perceptions of apprehension risk in communities with different crime rates.
How much policing is "enough"?
As far as we can tell, there are few studies that have explicitly and directly tested the dosages of risk thresholds and their effect on the decision to commit crime (but cf. Braga and Bond 2008; Koper 1995). Even the broader literature on how much police presence affects perceptions of apprehension risk is scant (see Nagin 2013b; Wikström et al. 2011). Differentiating between absolute and marginal deterrence is difficult (Nagin 1998, p. 53). Police may often create only marginal impacts from incremental policy changes. As we reviewed earlier, the evidence does seem to suggest strongly that more police presence reduces crime, but the dosage–response curve question and its effect on risk perception is largely missing (but see Koper 1995). There is some research which suggests that when police are abruptly not present, crime rates increase (Andenæs 1974; Deangelo and Hansen 2014; Di Tella and Schargrodsky 2004; Heaton 2010; Sherman and Eck 2002; Shi 2009). These studies show that sharp decreases in police presence and activity substantially increase crime—which essentially supports the basic argument that risk perceptions are affected by the presence of police officers. However, our ability to measure incremental differences in dosage and their link to crime reductions has been crude.
In fact, the term "dosage" itself is unclear, to the extent that it could operationally reflect (1) the "amount" of police presence (for instance, the number of minutes), or (2) the number of visits (such as the number of patrols), or (3) the average number of officers physically present at any given time to send such a deterrent message. Police in Paris, for example, perform patrols in groups of four to eight officers; San Diego (CA) police patrol in one-officer cars. Does this difference matter? Contextualizing sanction risks as "threats" Our interest in the quantification of certainty effects goes beyond the dosage/measurement question and aims to look at a more profound dimension of deterrence theory. Because police provide a symbolic crystallization of power exercised over society, the "amount" of threat applied can vary in both form and shape, and not just in magnitude. The quantity of deterrence is not just about "how much" in observable units of time and number of visits to hot spots, but also in terms of "symbolic quantification" of power (see Butterworth 1999; Deheane 1997; Pierce 1885). To begin with, some research suggests that there is no relationship between the number of police officers per capita and perceptions of the risk of arrest (Kleck and Barnes 2010), thus suggesting that increases in police resources will not increase general deterrent effects, and that decreases will not reduce deterrent effects. It is not the aggregate sum of police visible that matters, but the ways in which they become visible—spatially, temporally and procedurally. The latter dimensions can potentially change perceptions, and ultimately the rate or seriousness of criminal behavior. The risk of apprehension by sentinels is firstly associated with the degree to which power-holders are perceived as capable agents of the law (Bottoms and Tankebe 2012; Tankebe 2013). If an offender holds the view that the officer would not act to apprehend him, it makes little difference how frequently and for how long the officer is present at the hot spot. We do not assume that offenders generally think police officers are ineffective, or "toothless" (Ariel 2012, p. 39)—at least in western democracies—as the evidence suggests otherwise (Braga et al. 2012), but it still begs the question, what is it about the "police officer" that elevates the risk of apprehension? As Wilson (1968) showed many years ago, and Smith (1986) confirmed, the odds of a police officer making an arrest after witnessing a crime vary widely across communities. Why, then, should we expect more policing to reduce crime in hot spots in all communities? One answer is a perceived (and often real) change in the base rate of punishment. Whatever the level of apprehension risk, the presence of the police provides a symbol that embodies the state. The "costs" of punishment associated with that symbol may make crime unattractive the more visible the symbol becomes: past experiences, vicarious experiences and collective memories "instruct" the offender to reevaluate the motivation to commit an offense when a capable guardian such as a police officer is present (see discussion in Apel and Nagin 2014; Devos 2014; Kleck 2014; Sherman 1993, pp. 468–9). We are particularly interested in the symbolic association of these costs, which are embodied through police officers. But still, what is it about "police officers" that sends out these cost messages? 
The literature we reviewed earlier on the actual risk of punishment may miss our important point: the symbolic quantification that people ascribe to these costs. "If there is anything that is distinctively human," explains Hauser (2003, p. 566), "it is our capacity to represent quantities with symbols, to use such symbols with abstract functions or operators…all cultures have a system of symbolic quantification[…]for distinguishing (minimally) one object from many." In this respect, then, what is it about the police officer's presence that captures these perceptions of risk? The uniform, shield, weapons, insignia? Take the case of firearms, for instance: in some cultures, it is a direct symbol of police authority, but not in others. Are firearms necessary—beyond personal protectionFootnote 1—if the symbol of legitimate power-holding is conveyed through an insignia of "lesser" authority, such as that of PCSOs? For example, uniforms convey power and authority, as clothes carry a social significance (De Camargo 2012; Form and Stone 1955; Johnson 2001; Nickels 2008). The certainty effect can therefore be exerted with the most minimal of "threat symbols"—as is the case in England, Wales, Scotland, Ireland, some states in India, New Zealand, Iceland, and other jurisdictions in which police officers do not routinely carry firearms on their person while on general duty.Footnote 2 These "lesser" symbols include badges, patches, headgear, and uniforms that identify authority. Anything beyond these might prove unnecessary to deter offenders from committing crime. US and Australian citizens, for example, may comfortably assume that it is both the badge and the weapons of control that are the universal symbols of authority and power (De Camargo 2012). But sanction risks may be perceived based solely on the badge, without any need for the bullets.
The problem with these sets of questions is that we have had no systematic assessments to answer them. One cannot just turn to "arrest risks" as directly linked to the symbolic quantification of sanction threats (see Bouchard and Tremblay 2005; Richards and Tittle 1982; Viscusi 1986). Arrest carries the symbolic embodiment of sanction threats, and we often assume that the arrest is intertwined with apprehension risks ("I do not want to get arrested"). But in the context of prevention and deterrence, why is the threat of immediate seizure or forcible restraint necessary at all? For the most part, this question remains untouched. Nagin (2013b, p. 85) addresses this to some extent, by distinguishing between formal and informal sanction costs, the latter being costs that are separate from those that attend the imposition of formal sanctions. Formal sanctions include "loss of freedom or fines," while informal sanctions "include censure by friends and family and loss of social and economic standing… [and] the magnitude of informal costs may be largely independent of the severity of legal consequences." Merely being arrested for committing a crime, then, may trigger the imposition of informal sanctions. Williams and Hawkins (1986) use the term "fear of arrest" to label the deterrent effect of informal sanction costs. But again, what part of "police presence" exacerbates these perceptions of risk? Is it the power to use force, or the visible firearm? The potential of immediate arrest?
These questions lead us to wonder whether we can advance our theory of deterrence by "watering down" the threat of apprehension to its lowest threshold, with nothing but a sign of the state, stripped naked of weapons and most arrest powers. Would that symbolic representation of authority still effectively cause offenders to commit less crime with more police presence? On the one hand, the mere presence of police officers might be a necessary condition for effective deterrence. But if deterrence requires hard power, a merely symbolic presence might not be sufficient. The material capacity of power-holders to immediately apply incapacitation, when needed, may be required as well, in order for individuals to make the decision not to commit crime. If that is true, then officers must instil "fear of arrest", with some degree of actual threat of incarceration, in order for deterrence to exist. On the other hand, it is possible that most offenders are generally discouraged by even the most minimal threat of apprehension exerted by police officers who are "simply there" but do not need to apply their powers. If this is true, then deterrence is exerted through symbolic signals of authority, and the use of "hard" power may be excessive in relation to the objective of preventing crime.
The Peterborough hot spots experiment
Our objective in this study is to address the gaps in the literature in three major ways: treatment content, treatment measurement, and local crime displacement. First, by experimenting only with the allocation of PCSO patrol, we test a "softer" policing intervention aimed at reducing crime in hot spots: the only difference between the two randomly assigned groups of crime hot spots in our experiment is the amount of time spent by uniformed yet weapon-less foot-patrolling PCSOs with no powers of arrest. Second, by measuring both "soft" and "hard" police presence precisely and reliably, we contribute the first published hot spots policing experiment that tracks both foot and vehicle patrols with personally-issued GPS trackers (in body-worn radios). This technology of measurement allows us to present the most accurate (to date) treatment versus control comparisons of the independent variables of police patrols at hot spots. Third, we measure whether such policing causes spatial displacement to adjacent areas in relation to the known magnitudes of difference in the independent variables between treatment and control groups.
Settings and design
Peterborough is a city of 200,000 residents that occupies nearly 133 square miles (Peterborough City Council 2013). Located in the east of England within the county of Cambridgeshire, Peterborough is ranked as the 27th largest city in England and Wales. Once an industrial center, its employment is currently diversified across many sectors. The city's demographics include 81 % Whites, 12 % Asians and approximately 3 % Blacks. The city experienced 6.85 crimes per 100 residents in 2013, while the UK mean is 6.57. Peterborough's police, comprising a division of Cambridgeshire Constabulary, employed approximately 60 PCSOs in the year of the experiment. Broadly speaking, PCSOs in Peterborough were assigned to neighborhood commanders. Interestingly, because of budget constraints in forces in England and Wales, the primary utility of PCSOs in neighborhood policing is foot patrol, while most warranted Constables conduct vehicle-based patrols.
Thus, the findings of this study have implications for the future of foot patrols more broadly in the UK, as well as in any other country that adopts a "paraprofessional" model for neighborhood patrols (Ratcliffe et al. 2009; Wain and Ariel 2014).
In order to test the effectiveness of PCSOs, we were granted permission to work with the Constabulary's Information Management team to analyze crime data by location. We identified the 72 highest-crime hot spots in the city, and then (independently of any consultation with police) randomly assigned 34 to treatment conditions and 38 to control conditions. The precise locations of the treatment hot spots and their boundaries were communicated to local commanders directly: they were given their assigned hot spots and were informed that "these are the hottest hot spots" in the city. But in an effort to keep the control locations on a business-as-usual condition, we kept the police commanders as well as the PCSOs blind to the control locations. At no point during the experiment were they informed of the location of the control hot spots, in order to avoid any possible bias or contamination—thus maintaining a "partially-blinded experimental design."
Defining the hot spots
There is a voluminous body of literature on how to map crime and disorder hot spots. This literature describes many different methods of drawing small areas of land that tend to have a disproportional concentration of crime and disorder. From a theoretical perspective, it seems that the size of the hot spot does not change the overall pattern: a skewed concentration, in these hot spots, of all events in the city (Eck et al. 2005; Hart and Zandbergen 2014). Whether GIS systems define a hot spot as a cluster, a street segment, or by the archaic method of creating arbitrary circles or grids of crime, usually less than 5 % of the land "produces" at least 50 % of crime and disorder. Different methods support the "law" of concentration of crime in place, which means that "crime hot spot maps can most effectively guide police action [as long as the] production of the maps is guided by crime theories (place, victim, street, or neighborhood)" (Weisburd 2015; Weisburd et al. 2012).
However, there are practical implications for how a hot spot is defined. Our practical challenge was to define hot spots that made sense for defining foot patrol assignments. While we concur in principle with Weisburd et al. (2012) that defining hot spots as street segments (i.e., between two street intersections) is coherent and theoretically driven in a US grid system, we found that the approach is less applicable to the street topography and layout of ancient cathedral cities in England and Wales, such as Peterborough. The streets in British cities built before the streetcar era (let alone the automobile) are not arranged at right angles. They are, rather, a messy patchwork of short streets with long names. Using a street-segment approach to defining the hot spots would have produced a very small number of eligible hot spots, and the potential for local deterrence to materialize through a robust approach of "seeing and being seen" (Sherman and Weisburd 1995) would have been greatly minimized. Instead, we used 150-meter radius polygons, which allowed the officers some discretion about where to go within these constrained hot spots, but also embraced the location of a substantial number of crimes each year. There may also be additional social control benefits using this definition.
Police officers are more likely to be seen by non-criminal elements in main streets around crime attractors, which are more susceptible to being crime hot spots (Ariel et al. 2014; Weisburd et al. 2006), and which may both reduce fear of crime and increase public confidence through visible policing (cf. Sosinski et al. 2015).
In order to measure possible displacement effects, we supplemented the 72 hot spot polygons with a 50-meter zone beyond the hot spot boundaries. This plan was essential for our research objectives, despite the available research evidence against local displacement and in favor of diffusion of benefits to adjacent areas around the hot spots (Bowers et al. 2011; Weisburd et al. 2006; but cf. Sorg et al. 2014).Footnote 3 The officers were instructed to patrol the hot spots, not the buffers, but whatever the treatment effect would be, it was measured in these buffer zones as well as within the hot spots.
In order to ensure statistical independence of the units of analysis, we created a third zone of at least 100 meters beyond the buffer zone of each hot spot. This third area served as an additional cushion, so that no two hot spots would come any closer than the combined width of their cushions. This criterion helped avoid a situation in which one treatment and one control hot spot, for instance, are so close to each other that the treatment effect could spill over into the control area. We thereby reduced the risk of violating the Stable Unit Treatment Value Assumption (SUTVA): that the effect of the treatment condition on each unit (treatment or control) is independent of the effects of treatment on any other units (Sampson 2010). This necessary procedure, unfortunately, greatly reduced the number of eligible hot spots. Yet the gains for the internal validity of the design outweighed any loss of statistical power from a smaller sample.
We defined a "hot spot" in this experiment as an area of land (150 m radius) with no less than 36 calls for service in each of two successive 12-month baseline periods (that is, in the 24th to 13th months before the experiment, as well as in the 12 months immediately before it). The two-year baseline procedure followed the method used by Sherman and Weisburd (1995) in the first hot spot policing experiment, in order to ensure stability of the hot spot locations during the experiment, with hot spots that were persistent and chronic (Weisburd et al. 2004). The types of incidents included in the definition of the hot spots were calls about any street crime categories that visible officers would be able to deter in the public domain, such as antisocial behavior (49 % of all incidents), robberies, violence, vehicle theft, and graffiti—but not predominantly indoor crimes such as domestic disturbances.
Random assignment and partial blinding
The eligibility criteria we used created a list of all 72 eligible hot spots across the entire city. We conducted pure simple random assignment, which created a 47 % treatment and 53 % control group split of the full population of eligible hot spots. As noted earlier, station and district commanders were not given the full list of hot spots; local police forces were informed of the location of their treatment hot spots, but they were blinded to the location of the control hot spots. This blinding process decreased the chance of contamination and police-initiated SUTVA violations (see Sampson 2010).
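To make the geometry and assignment rules concrete, the following minimal sketch treats each hot spot as a 150-meter circle with a 50-meter catchment ring and a 100-meter cushion, and randomly splits the eligible sites into 34 treatment and 38 control units. It is an illustration only: the coordinates, the function names, and the reading of the separation rule (non-overlapping outer envelopes) are our assumptions, not the authors' procedure or code.

```python
import math
import random

# Geometry constants taken from the text; everything else is illustrative.
HOTSPOT_RADIUS = 150   # patrolled hot spot (meters, projected coordinates assumed)
CATCHMENT = 50         # displacement-measurement buffer around the hot spot
CUSHION = 100          # additional separation zone beyond the buffer
OUTER = HOTSPOT_RADIUS + CATCHMENT + CUSHION  # 300 m envelope per hot spot


def sufficiently_separated(centres):
    """True if no two hot-spot envelopes overlap (one reading of the cushion rule)."""
    for i, (x1, y1) in enumerate(centres):
        for x2, y2 in centres[i + 1:]:
            if math.hypot(x1 - x2, y1 - y2) < 2 * OUTER:
                return False
    return True


def randomly_assign(hotspot_ids, n_treatment=34, seed=None):
    """Pure simple random assignment of eligible hot spots (34 treatment, 38 control)."""
    rng = random.Random(seed)
    ids = list(hotspot_ids)
    rng.shuffle(ids)
    return sorted(ids[:n_treatment]), sorted(ids[n_treatment:])


# Example with hypothetical hot spot identifiers 1..72:
treatment_ids, control_ids = randomly_assign(range(1, 73))
```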
The number of treatment hot spots (n = 34) also fitted the operational availability of patrol officers seconded to the experiment.
Treatment conditions
There were three major characteristics of the treatment in this experiment. First, the treatment was delivered by uniformed but civilian police staff with no weapons or arrest powers, who were tasked to "be visible" in order to deter crime and antisocial behavior in hot spots. They patrolled alone, on foot, out of sight of other police staff, with only a radio to connect them to their colleagues.
Second, the treatment was targeted on the "hot hours" and "hot days" for Peterborough. Based on the 24-month baseline analysis, the temporal crime peaks were on Tuesdays through Saturdays, between 1500 and 2200 hours. These were the patrol hours for the PCSOs, and within these temporal boundaries we measured the treatment effect. At any given moment during these hours and days of the week over the 12 months of the experiment, there were about a dozen PCSOs on the street conducting these patrols. Given the resource constraints and the distance between the hot spots, each hot spot was assigned to receive three separate one-PCSO patrol visits per day, or 780 visits per hot spot over 12 months. Each visit was required to last 15 minutes, based on prior evidence about how to maximize residual deterrent effects after PCSOs left the scene, as demonstrated by the Koper Curve (Koper 1995; Telep et al. 2012). Throughout the experimental period, the assignment of PCSOs to the hot spots was on a rotating basis, so that PCSOs patrolled different combinations of the 34 treatment hot spots rather than being limited to a few hot spots for all patrols.
Third, the delivery of the treatment was tracked (Sherman 2013; Wain and Ariel 2014) on a daily basis, although it was not fed back to either the Sergeants or the PCSOs on patrol (see Sherman et al. 2014). As we reviewed above, recent hot spot studies have now begun to measure more systematically what the officers do, rather than simply measure when and where they were deployed. Note, however, that even with the recent evidence there is a lack of precision in measurement of both the time and the location of the officers, as personal GPS locators have not been used thus far (see below). The Rosenfeld et al. (2014) study is the only study that reports a sufficiently accurate account of outputs (e.g., arrests, stop and account). Systematic social observations (see Sampson and Raudenbush 1999) would have been the appropriate approach here, but the cost of that method exceeded our budget. Instead, we provide the following qualitative account of the treatment tactics and training.
PCSO activities in hot spots
The PCSOs were told to concentrate on being visible, to the exclusion of any other task. They were not tasked to problem-solve (Goldstein 1979), to conduct community policing in the classic sense (e.g., Skogan and Hartnett 1997), or to conduct patrols targeted on any particular social or crime problem (e.g., McGarrell et al. 2002; Sherman and Rogan 1995a, b). The aim was to deter crime through their signals of police authority: the symbolic representation of power which the PCSOs carried, with uniforms and a two-way radio. We told them just what our theory was: that potential offenders would be discouraged by even the most minimal threat of apprehension, exerted by police staff who are "simply there" but do not need to (and cannot in any case) apply any level of force beyond a citizen's arrest.
Practically speaking, there is very little else that can be achieved by way of active engagement in the 15-minute patrols allocated to each hot spot. Each officer was accountable for 3–4 hot spots per evening. The walking distance between the hot spots, often a mile apart, made the experiment operationally challenging. The PCSOs repeatedly told the research team that they were pressured to "beat the clock" in rushing between hot spots. This is not to say that officers were instructed not to deal with events as they occurred in the hot spots. If members of the public required assistance, or if the PCSOs encountered crime or disorder, they were still required to report the event to the police. However, in terms of ordinary "allocated time" (see Weisburd et al. 2015) during the experiment, these officers focused on a theoretically preventive, "saturated" presence compared to control conditions.
The no-treatment hot spots were not exposed to these proactively directed patrols. Nevertheless, the control areas were also visited frequently by police constables or PCSOs sent to them reactively after citizen calls to police about crime and disorder. We were able to fully measure the time and location of all these visits, regardless of whether they were reactively or proactively mobilized (Reiss 1971), for both treatment and control conditions, and for PCSOs as well as "regular" cops (see below). In fact, both treatment and control hot spots continued to be visited by Police Constable "response officers" both reactively and proactively (e.g., stop-and-search, problem-solving or crackdowns). The crucial distinction between the treatment groups was that control hot spots were not exposed to a prescribed daily level of proactive patrols, five days a week, in fixed hours.
Control conditions
Due to recent budget cuts, relatively few proactive directed patrols by PCSOs were deployed across the city in the period before and during the experiment. Both PCSO and Constable police officers were engaged primarily in reactively responding to emergency calls for service, with very little targeted foot patrol anywhere, with the exception of the treatment group. Our decision to withhold the locations of the control hot spots did not free them up to receive proactive patrol. Rather, it avoided raising the question of whether they should receive some of the patrols that were assigned to the treatment group. This experiment can therefore be characterized as a randomized comparison between PCSOs providing proactive and focused foot patrols (as treatment) versus both Constables and PCSOs providing reactive policing (as control).
Dependent variables
We used two primary outcome measures to assess the treatment effect: the number of calls-for-service to the police ("999 calls") and the number of victim-generated crimes within the participating hot spots, 24 months prior to the experiment, and then again during the 12 months of the experiment. We compared changes in these two outcomes between the periods before and after the beginning of the trial, and then compared this difference between the two study groups (treatment and control conditions). Crime reports proactively generated by police activity rather than by victim or witness reports—that is, incidents such as drug offenses and stop-and-searches, which are essentially outputs rather than treatment outcomes—were clearly marked in police records; we excluded them from the dependent variable measures (see Sherman and Weisburd 1995).
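As an illustration of how the two primary outcomes could be assembled from an incident table, the sketch below filters out police-generated records and counts victim-generated crimes and calls-for-service per hot spot and period. The file name and column names (hotspot_id, period, record_type, origin) are hypothetical stand-ins, not the Constabulary's actual schema.

```python
import pandas as pd

# Hypothetical incident extract: one row per recorded crime or call for service.
incidents = pd.read_csv("incidents.csv")

# Keep only victim- or witness-generated records: proactively generated records
# (e.g., drug offenses, stop-and-searches) are outputs of patrol, not outcomes.
victim_generated = incidents[incidents["origin"] == "victim_or_witness"]

# Count crimes and calls for service per hot spot, separately for the baseline
# and experimental periods (period: 'baseline' or 'experiment').
outcomes = (
    victim_generated
    .groupby(["hotspot_id", "period", "record_type"])
    .size()
    .unstack(["period", "record_type"], fill_value=0)
)
```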
A third outcome measure is a transformation of the crime count. Rather than just treating all crime types as equal, we also applied the Cambridge Crime Harm Index (CHI) to the types of crime reported in the hot spots (Sherman 2007, 2013; Sherman et al. 2016). This procedure applies the "starting point" for sentencing persons convicted of each type of crime, without taking into account the defendant's prior criminal history or circumstantial mitigating or aggravating factors. Measured in the number of days of imprisonment recommended, the Cambridge CHI provides a "bottom line" for any mixture of crimes of different types and frequency within type, across offenders, victims, times or places. It is ideally suited for comparing the benefits of patrol time in practical terms, supplementing the more esoteric concept of "standardized mean differences", which has little face value for public policy. For other applications of the CHI in policing, see Bland and Ariel (2015).
GPS tracking data
As far as we can tell, this study is the first to report dosage measures in both treatment and control hot spots, of all police officers in the city, down to the second, derived from body-worn GPS trackers (not Automatic Vehicle Location transponders in police cars). Every police two-way radio in the city was equipped with a global positioning system that tracked the movement of the officer at every given moment on duty. The only theoretical exception would occur if the officer removed the radio attached to their uniform and walked over 50 meters away from it; this unlikely scenario was never observed, primarily because losing contact with the central communications team is a safety risk for officers on duty. The GPS trackers inside each two-way short-wave radio set (see Photo 2) were, in fact, originally installed to provide for the personal security of officers, and only subsequently used to allocate officers in real time to calls for service.
Photo 2 Police radio with embedded GPS tracker
GPS technology can therefore be used to measure how much time officers spend in particular areas, and how many visits they make, defined as follows. Every radio tracker was set to transmit to the satellite a "ping" with the spatiotemporal coordinates (latitude, longitude and a timestamp) of the tracker (see Wain and Ariel 2014). For the purposes of the experiment, the system was set to ping every minute. Each visit to a hot spot was therefore defined as an uninterrupted period between the first ping inside the boundaries of a hot spot and the first ping thereafter locating the officer outside those boundaries, within the limits of the frequency of readings. The boundaries were defined by our programming of the GPS back-office systems to "geo-fence" our targeted 72 areas of land. By counting how many pings the trackers sent from within these geo-fenced areas, we were able to measure with high precision and accuracy how many minutes and how many visits each officer made to these geo-fenced areas.Footnote 4 This "point in polygon" analysis was applied for all included hot spots.
Measures of displacement
Despite the accumulated evidence against spatial displacement to areas around the hot spots, it was nevertheless important to measure it in the current setting. We measured the number of victim-generated crimes that took place within the 150-meter-radius hot spots (both treatment and control conditions), and then the number of crimes in the catchment zones of 50-meter radii around the circumferences of the hot spots.
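Before turning to the displacement statistics, here is a minimal sketch of how the geo-fenced "minutes" and "visits" measures described above could be derived from raw one-minute radio pings. The data structures and field names are hypothetical, and hot spots are simplified to 150-meter circles in a projected coordinate system; this is an illustration of the logic, not the back-office system the authors used.

```python
import math
from collections import defaultdict
from datetime import timedelta

# Hypothetical inputs: pings are tuples (officer_id, timestamp, x, y) in meters;
# hotspots are tuples (hotspot_id, centre_x, centre_y).
HOTSPOT_RADIUS_M = 150
PING_INTERVAL = timedelta(minutes=1)   # the system pinged once per minute


def hotspot_containing(x, y, hotspots):
    """Return the id of the geo-fenced hot spot containing the point, if any."""
    for hid, cx, cy in hotspots:
        if math.hypot(x - cx, y - cy) <= HOTSPOT_RADIUS_M:
            return hid
    return None


def summarise_presence(pings, hotspots):
    """Count discrete visits and minutes of presence per (officer, hot spot).

    A visit runs from the first ping inside a hot spot's boundary to the first
    ping locating the officer outside it (or a break in the ping stream); with
    one ping per minute, minutes of presence are approximated by the ping count.
    """
    visits = defaultdict(int)
    minutes = defaultdict(int)
    last_inside = {}   # officer_id -> (hotspot_id, timestamp of last in-boundary ping)
    for officer, ts, x, y in sorted(pings, key=lambda p: (p[0], p[1])):
        hid = hotspot_containing(x, y, hotspots)
        if hid is None:
            last_inside.pop(officer, None)   # the current visit, if any, has ended
            continue
        minutes[(officer, hid)] += 1
        prev = last_inside.get(officer)
        # A new visit starts if the officer was previously outside this hot spot
        # or the ping stream was interrupted for longer than the ping interval.
        if prev is None or prev[0] != hid or ts - prev[1] > PING_INTERVAL:
            visits[(officer, hid)] += 1
        last_inside[officer] = (hid, ts)
    return visits, minutes
```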
As described more fully below, we used Guerette's (2009; but see Bowers and Johnson 2003) Weighted Displacement Quotient (WDQ) to determine displacement or diffusion effects as a result of the RCT. The WDQ determines the presence of displacement or diffusion in relation to changes in the treatment and control areas.
Statistical procedures
First, we estimated the magnitude of any treatment effect. We started by calculating the raw counts of incidents (calls for service) and crimes within the hot spots, before and during the intervention. We then calculated the difference-in-differences in the means of these measures between the treatment and control groups. To estimate any treatment effect, we computed the standardized mean difference (Cohen's d; Cohen 1988). These procedures allowed us to compare any intervention effect in the present study with the available hot spots research summarized in Braga et al.'s (2012) systematic review.
Second, we estimated the magnitude of any treatment differences. We calculated the raw GPS data for the number of patrol visits per day and the number of minutes spent in the hot spots, for both the PCSOs and the warranted Constables. Because we had no such measures before the experiment began, we are limited to comparisons during only the 12 months of the experiment. We compared mean values by treatment group for the magnitude of differences and conducted t tests in order to assess the statistical significance of any differences.
Third, we estimated the cross-sectional observational model of the experiment across all 72 hot spots, taking into account their individual police presence data. We used a linear model to assess these differences between experimental and control groups in terms of calls for service, and then in terms of crime. Group assignment ("experimental"/"control") was the main predictor, with the addition of interaction terms between treatment group status and treatment delivered as measured by the GPS data. This model incorporates the time PCSOs spent in the hot spots, the number of visits to the hot spots, and the interaction terms of these with the treatment condition, while controlling for the visits of the Police Constables to all 72 hot spots. We present the unstandardized and standardized coefficient models, and use the estimated marginal means (for more on marginal means, see McCulloch et al. 2008) in order to report the mean interaction responses, adjusted for all other covariates in the model. We plotted these responses on a scatterplot, and drew logarithmic trend lines for each group (treatment vs. control). These analyses may provide a novel approach to estimating the aggregation of the gains with each additional unit of patrol time.
Fourth, we used the WDQ to measure the displacement effect associated with the treatment condition. We measured both the gross effect and the net effect for the experiment. A positive outcome in the gross effect indicates there was a decrease in crime just within the target area, without reference to any displacement. A positive outcome in the net effect indicates a decrease in crime in the target area that was greater than or different from changes in the control area. When the WDQ measure of the net effect is a negative number, it indicates there was a displacement effect; a positive number for the WDQ indicates a net crime reduction across the target hot spot and its (theoretical) local displacement measurement catchment zone.
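A compact sketch of two of the summary statistics just described follows, with the values reported later in the Results used as a check. The function names are ours, and the Cohen's d formula is one common pooled-standard-deviation formulation assumed to match the authors' calculation; it is an illustration, not their code.

```python
import math


def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference using the pooled standard deviation (Cohen 1988)."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd


def wdq(target_pre, target_post, buffer_pre, buffer_post, control_pre, control_post):
    """Weighted Displacement Quotient (Bowers and Johnson 2003; Guerette 2009)."""
    buffer_change = buffer_post / control_post - buffer_pre / control_pre
    target_change = target_post / control_post - target_pre / control_pre
    return buffer_change / target_change


# Checks against the figures reported in the Results below:
# crime difference-in-differences per hot spot, treatment vs. control.
print(round(cohens_d(-5.85, 10.190, 34, -3.55, 13.647, 38), 3))   # ~ -0.189
# Treatment target 1243 -> 1044, buffer 533 -> 486, control 1282 -> 1147.
print(round(wdq(1243, 1044, 533, 486, 1282, 1147), 3))            # ~ -0.134
```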
Finally, we estimated the overall impact of the treatment group condition by using Guerette's (2009) Total Net Effects (TNE) model. The TNE gives the overall effect of the differences between the two randomly assigned groups of hot spots by taking into account the variation in crime figures in the treatment areas compared to the control areas, while controlling for the displacement effect, if it exists. The TNE is particularly valuable for assessing the "bottom line" reduction in crime attributable to any intervention, such as the proactively assigned PCSO patrols in the 34 treatment group hot spots.
Statistical power is defined by Cohen (1988) as the probability of detecting a statistically significant difference in a comparison of two groups when such a difference truly exists. The study population of 72 hot spots used in this experiment offered us a sufficient level of statistical power to detect a true effect as unlikely to be due to chance. Using the statistical analysis package Optimal Design (Spybrook et al. 2013), we estimated that the population of 72 hot spots was large enough to detect small to medium effects as significant if the cutoff point was set at .1 (a 10 % risk that the finding is due to chance), as long as the hypothesis we assumed was one-directional and the estimated power was 0.80. The power level of .80, which means that there is an 80 % chance that the test will detect true effects as within the significance cut-off level, is the conventional measure used in social science, although it is no more absolute than any other arbitrary threshold. We selected these parameters for our analysis in reference to the existing literature on hot spots (Braga et al. 2012).
Baseline sample characteristics and treatment group similarity
The total numbers of incidents in the hot spots, broken down into calls for service (CFS) and crime counts, before and during the experimental period, are shown in Table 1. One year before the experiment began, the hot spots attracted 26,500 calls-for-service and 2,525 crimes. The majority (72.89 %) of the calls fell into the top five incident categories: antisocial behavior (40.71 %), suspicious circumstances (18.53 %), violence (7.37 %), burglary of dwelling (6.27 %) and criminal damage (5.56 %). The leading crime types, at 94.81 % of all crimes, comprised the top five major crime categories: theft (31.88 %), violence against the person (24.63 %),Footnote 5 criminal damage (22.34 %), burglary (14.65 %) and robbery (1.31 %).
Table 1 Calls for service and crime data (baseline counts)
The distribution of events at baseline suggests that while the minimum requirement for being a hot spot was set at 36 incidents in a 12-month period, the mean number of CFS per hot spot in the baseline period was 368.06 (SD = 252.39). The baseline frequency of calls per hot spot was asymmetrical (skewness = 1.291; SE = .283), with some eligible hot spots experiencing up to 619 incidents. The same pattern emerged for the crime data, with the mean number of crimes per hot spot being 35.07 (SD = 24.54) at baseline, with a highly skewed distribution (skewness = 1.263; SE = .283), ranging from 3 to 85 crimes. None of the pre-treatment between-group comparisons was significant [t(70) = .4521, p = .652 for CFS; t(70) = .4844, p = .630 for crime incidents].
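The sensitivity calculation described above can be approximated with a standard two-sample power solver. The sketch below reproduces the stated design parameters (34 and 38 hot spots, one-directional test, alpha = .10, power = .80) and solves for the smallest detectable standardized effect; it uses statsmodels as a stand-in and is not the Optimal Design software the authors used.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the minimum detectable standardized effect size (Cohen's d) given
# the design parameters stated in the text.
solver = TTestIndPower()
min_detectable_d = solver.solve_power(
    effect_size=None,       # the unknown we solve for
    nobs1=34,               # treatment group hot spots
    ratio=38 / 34,          # control group size relative to treatment
    alpha=0.10,             # 10 % risk that the finding is due to chance
    power=0.80,             # conventional 80 % power
    alternative="larger",   # one-directional hypothesis
)
print(round(min_detectable_d, 2))
```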
The GPS data for both PCSOs and Police Constables show high treatment integrity: a large difference in PCSO patrol between the treatment groups, while Police Constable patrol was held constant, with no significant differences in PC measures between the treatment groups (Table 2). This finding applies to both measures of police activity: the duration of patrol time in the hot spots (measured in minutes) and the number of discrete patrol visits per day (see definition above), for the year of the RCT.
Table 2 GPS data: means, standard deviations and t test scores
As shown in Table 2, the treatment group hot spots received 135 % more PCSO patrol time than the control group hot spots: a mean of 37 minutes of PCSO patrol per day per treatment group hot spot versus about 16 minutes per day per hot spot in the control group. The treatment group hot spots also received 75 % more PCSO visits per day, on average, relative to the control group hot spots. These findings show far more clearly than any recent hot spot experiment the exact magnitude of the difference in "dosage" measures of policing, as well as a means by which to estimate police compliance with the treatment program—which was substantial, but by no means perfect. While the PCSOs were tasked to provide 3 visits per day, for 45 minutes in total per day (that is, 15-minute patrols), in all hot spots, they delivered more visits, but fewer minutes, than required. In none of the comparisons of patrol minutes did the PCSOs deliver the treatment plan of 45 minutes per day, although they were only 18 % short of the goal (overall mean 37 minutes). In contrast, the number of visits exceeded the treatment plan by 57 %, with an "overdose" of 4.7 patrol visits per hot spot per day.
The best news was that, while Police Constables made many more visits to the hot spots than did the PCSOs, there was minimal difference between treatment groups in either the minutes or the visits delivered by PCs. These "fully weaponized" (for Britain) Police Constables spent on average 28 minutes per day per hot spot across all 72 of the hot spots, with twice as many visits per day (8.5) as the PCSOs. But as Table 2 shows, the 21 % fewer patrol visits PCs made in the treatment hot spots relative to controls (8.53 PC visits per day per treatment hot spot vs. 10.5 for controls) was not a significant difference (p = .416). Neither was the 8 % more PC patrol minutes in the treatment group hot spots (28.29 per hot spot per day) than in the controls (26.0 per hot spot per day) (p = .488).
By combining the measures of PCSO and PC activity, we can answer an additional research question: how much more policing did the treatment group have, overall, than the control group? For patrol minutes, we can report that the treatment group had a combined mean of PC and PCSO patrol time of 65.7 minutes per day per hot spot, compared to a combined mean total of 42.0 minutes at each control group hot spot. For patrol visits, the comparable figures are 13 visits per day per treatment group hot spot compared to 13.16 per spot for the controls. Thus, while our experiment created a 56 % increase in total patrol time in the treatment group relative to controls, it created essentially no difference in (or slightly fewer) patrol visits in the treatment group. These mean differences per hot spot create an interesting contrast to the pooled data across all treatment group hot spots.
As reported in Table 2, the total number of PCSO and PC patrol visits and the total PCSO and PC time spent in hot spots in the treatment and control conditions are nearly identical (4,761 vs. 4,794 patrol visits and 1,058 vs. 1,072 hours, respectively). This finding, along with the nonsignificant differences in the "amount" of visibility of warranted Police Constables in the hot spots, suggests that should any treatment effect be observed, it is unlikely to be attributable to the overall combination of "hard" and "soft" police presence, but rather to the large differences between treatment groups in the "soft" policing delivered by the PCSOs.

Main effects of soft policing

Table 3 lists the difference-in-differences in means (DID) for calls for service and crimes per hot spot, comparing the pre-treatment and post-treatment figures for each hot spot. As shown, the crime DID for the 34 treatment hot spots was −5.85 (SD = 10.190) and the CFS DID was −207.7 (SD = 166.598), while the crime DID for the 38 control hot spots was −3.55 (SD = 13.647) and the CFS DID was −173.37 (SD = 158.046). These figures represent, for crimes, a 64.8 % greater reduction in crimes per hot spot in the treatment group relative to the controls,Footnote 6 with an absolute decline of 16.00 % in mean crimes per treatment hot spot relative to the absolute decline of 10.5 % in crimes per control group hot spot. Had the treatment hot spots experienced a 10.5 % decline, they would have had 130 fewer crimes. Instead, the treatment group had 199 fewer crimes, or 68 fewer crimes than if they had generated the same reduction as the control group: a mean of 2.6 more crimes prevented per hot spot. The figures also indicate a 19.79 % relatively greater mean reduction in CFS per treatment hot spot: an absolute decline of 61 % in the treatment group, compared with the control group's absolute decline of 51.2 %. Had there been a 51.2 % decline in the 34 treatment areas' baseline mean of 382 CFS per treatment group hot spot (the magnitude of the control group's drop), it would have yielded a new mean of 186.4 CFS per hot spot. Instead, the treatment group mean was 174.7 per hot spot during the experimental year, a mean difference of 11.7 calls per hot spot, or 399 CFS prevented.

Table 3 Difference-in-differences in means (DID) for calls for service and crimes (baseline and post-random assignment): treatment versus control hot spots

As explained earlier, we converted these scores to standardized mean differences (Cohen's d) in order to be able to compare our results to the existing body of literature. The results for "soft" policing are highly comparable to the results for "hard" policing: the reported crime data yield an effect size of d = −.189 (95 % CI −.653, .27), and the calls-for-service data yield d = −.211 (95 % CI −.676, .252). These results mirror the overall treatment effect of hot spots policing found in Braga et al.'s (2012:58) systematic review (d = −.184). Finally, we counted no reported incidents of assault or physical abuse against the PCSOs: not one incident was identified in any of the crime reports or calls for service for the 72 hot spots during the 12 months of the experiment.
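As a worked check, the short sketch below reproduces these standardized mean differences from the DID means and standard deviations reported above, assuming a pooled-standard-deviation Cohen's d; the confidence intervals quoted in the text would require the corresponding variance formula, which is not shown here.

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

# DID values reported in Table 3 (treatment n = 34, control n = 38)
d_crime = cohens_d(-5.85, 10.190, 34, -3.55, 13.647, 38)
d_cfs = cohens_d(-207.7, 166.598, 34, -173.37, 158.046, 38)
print(f"Crime: d = {d_crime:.3f}")  # approx. -0.19
print(f"CFS:   d = {d_cfs:.3f}")    # approx. -0.21
```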
Measuring displacement

This section reports our analysis of displacement using the WDQ procedures described above. As Table 3 shows, the target hot spots in the treatment group experienced a reduction during the experiment to 1044 victim-generated crimes, from 1243 victim-generated crimes in the baseline year, for a gross effect (GE) in the treatment group of 199 fewer crimes. The net effect (NE), which takes into account the baseline period for the target areas of both treatment and control (1243 and 1282, respectively), is (1243/1282) − (1044/1147), or 0.06. The positive number indicates that the decrease in crime in the treatment target areas was greater than the change in the control target areas. Next, the WDQ was computed in order to determine displacement into the 50-meter buffer zones around the target areas, or the presence of displacement in relation to changes in the treatment and control areas (Eq. 1)Footnote 7:

$$ WDQ = \frac{\left(486/1147\right)-\left(533/1282\right)}{\left(1044/1147\right)-\left(1243/1282\right)} = -.134 \qquad (1) $$

$$ TNE = \left[1243 \times \left(1147/1282\right)-1044\right]+\left[533 \times \left(1147/1282\right)-486\right] = 58.98 \qquad (2) $$

As shown in Eq. 1, the WDQ yielded a negative outcome, which indicates that there was a displacement effect. However, because the value falls between −1 and 0 rather than below −1, the displacement was not greater than the reduction achieved in the intervention area. This outcome becomes clearer when looking at the overall impact of the RCT, as determined by the TNE model: the TNE gives the overall outcome of the project (Eq. 2). The TNE of this RCT was positive and relatively large (Guerette 2009), suggesting a substantial treatment effect: the overall reduction in crime relative to control areas, adjusting for estimated displacement effects, was 59 crimes prevented (Eq. 2). That number would be higher if we did not count 100 % of the increase in crime in the displacement catchment areas as crimes that would have occurred in the hot spots had the extra PCSO patrols not pushed those crimes "around the corner" (Weisburd et al. 2006). Outcome variations exist when observing the treatment effect on specific crime categories (Table 4), with the most pronounced effects found for burglaries, thefts, criminal damage and grievous bodily harm (against person) offenses: in all of these categories, not only were the total net effects above 20 crimes, but there was also a diffusion-of-benefits effect that was greater than the reduction achieved in the intervention area, most notably in the cases of theft, burglaries and crimes against persons.

Table 4 Measures of diffusion of benefits (weighted displacement quotient)
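The arithmetic behind Eqs. 1 and 2 can be reproduced directly from the victim-generated crime counts quoted above; the sketch below does so, using the baseline and experimental-year counts for the treatment target areas, the 50-meter buffer (catchment) areas, and the control target areas.

```python
# Victim-generated crime counts quoted in the text
A0, A1 = 1243, 1044   # treatment target areas: baseline, experimental year
B0, B1 = 533, 486     # 50-m buffer (catchment) areas: baseline, experimental year
C0, C1 = 1282, 1147   # control target areas: baseline, experimental year

# Weighted Displacement Quotient (Bowers & Johnson 2003; Guerette 2009)
wdq = (B1 / C1 - B0 / C0) / (A1 / C1 - A0 / C0)

# Total Net Effects (Guerette 2009)
tne = (A0 * (C1 / C0) - A1) + (B0 * (C1 / C0) - B1)

print(f"WDQ = {wdq:.3f}")  # approx. -0.134
print(f"TNE = {tne:.2f}")  # approx.  58.98
```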
Estimating the treatment effect with frequency vs. duration in hot spots

To test the effect of treatment delivery with the GPS data, we used a linear model. The results are displayed in Table 5. We entered the treatment condition as a predictor and the post-random assignment incident data as the dependent variable, and we included the GPS-related variables in the model as covariates: the frequency of patrols per day and the number of minutes the PCSOs spent per day in the hot spots. We then included the interaction terms of each of these covariates with the treatment predictor. We also controlled for the visits of the Police Constables.

Table 5 Estimating the treatment effect with GPS data (n patrols per day and n minutes per day): calls for service (CFS) and crime data

Model I examines the effect of PCSO foot patrols on calls for service. It shows that the main effect of the randomly assigned treatment condition on CFS was only marginally significant (β = .48; p = .13) when all other variables are controlled for. The effect of Police Constables, as expected from Table 2, was negligible and nonsignificant. In contrast, the variations across the 72 hot spots in both duration (the total PCSO time present in hot spots) and frequency (the number of discrete PCSO visits) were significant predictors of CFS reductions [(β = .85; p ≤ .01) and (β = .52; p ≤ .01), respectively]. Total time spent in hot spots was slightly more important in Model I than the number of visits per day. Yet when we look at the interaction term between group assignment and frequency in Model I, we see that with every additional PCSO visit per day in the treatment hot spots, the number of calls for service decreases across the group by approximately 34 (β = −.67; p ≤ .1), when all other variables in the model are controlled. Similarly, when looking at the interaction effect of duration within the hot spots, Model I suggests that each additional minute in the hot spot is associated with 5 fewer calls for service (β = −.87; p ≤ .05); this also happens to be the strongest predictor in the model. Overall, the model explains 32 % of the variance. Model II uses crime reports as the dependent variable. It shows, once again, that group assignment is not statistically significant as a stand-alone predictor when all other variables are controlled. Police Constable visits, as expected, remain nonsignificant and negligible. Yet as we found in Model I for CFS, Model II shows that both duration and frequency of PCSO visits were significant predictors of crime reduction, with time spent at hot spots carrying the greater weight [(β = .88; p ≤ .01) and (β = .28; p ≤ .1), respectively]. The interaction term for duration × treatment, again, had a greater impact on crimes than the interaction term for frequency × treatment [(β = −.73; p ≤ .05) and (β = .5; p ≤ .15), respectively], with the latter only marginally significant. With every additional visit to the treatment hot spots per day, the number of crimes declined by approximately 4, and with every additional minute in the treatment hot spots the number of crimes was reduced by 0.7. Overall, the model explains 25 % of the variance.
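The structure of these models can be written compactly. The sketch below fits an OLS regression of the same general form with the statsmodels formula API; because the study data are not reproduced here, it runs on synthetic stand-in values, and the variable names are illustrative rather than the authors' own.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 72  # hot spots

# Synthetic stand-in data; column names are illustrative only.
df = pd.DataFrame({
    "treatment": np.repeat([1, 0], [34, 38]),
    "pcso_visits_per_day": rng.normal(3.6, 1.5, n).clip(0),
    "pcso_minutes_per_day": rng.normal(26, 10, n).clip(0),
    "pc_visits_per_day": rng.normal(9.5, 3, n).clip(0),
})
df["cfs_post"] = rng.poisson(180, n)  # post-assignment calls for service

# Treatment condition, PC presence, PCSO dosage, and the
# treatment x dosage interaction terms, mirroring the setup of Model I.
model_cfs = smf.ols(
    "cfs_post ~ treatment + pc_visits_per_day"
    " + pcso_visits_per_day + pcso_minutes_per_day"
    " + treatment:pcso_visits_per_day"
    " + treatment:pcso_minutes_per_day",
    data=df,
).fit()
print(model_cfs.summary())
```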
We then computed the estimated marginal means, which report the mean response for each factor, adjusted for all other covariates in the model. We ran the analysis twice. One analysis focused on the interaction term of duration (therefore holding frequency constant); a second analysis focused on the interaction term of frequency of visits (holding duration constant). The covariates in the first analysis were fixed at the following mean values: Police Constables' visits = 3,488.8; PCSOs' minutes per day = 26.1. The covariates in the second analysis were fixed at the following mean values: Police Constables' visits = 3,488.8 and PCSOs' visits per day = 3.6. Scatterplots in Figs. 1 and 2 depict our analyses of the changes in the number of incidents with every additional minute spent patrolling the hot spots, or with every additional patrol per day, while controlling for the covariates. The scatterplots show logarithmic trend lines for each group (treatment vs. control).

Fig. 1 Treatment vs. control conditions: estimated marginal means (incidents) by number of PCSO patrols per day

Fig. 2 Treatment vs. control conditions: estimated marginal means (incidents) by number of PCSO minutes per patrol

As shown in Fig. 1, more frequent PCSO patrol visits in treatment hot spots are associated with fewer incidents, while more frequent PCSO patrol visits in the control hot spots are associated with more incidents. In nearly every comparison between the groups on the x-axis (patrols per day), the treatment patrols can be linked to fewer incidents, compared to control conditions. The trend lines diverge further as the number of patrols in the hot spots increases, with the largest estimated difference between treatment and control conditions found at the upper-bound number of visits per day (n = 8). A similar pattern emerges in terms of time on patrol (Fig. 2), when comparing total time spent in the hot spots by PCSOs in the treatment hot spots to their time in the control hot spots. The trend line once again suggests that more time spent in control hot spots is associated with more incidents. There is also some increase in the number of incidents in the treatment hot spots (with more time spent in these hot spots); however, the pattern is less pronounced than in the control group.

Dosage-to-response benefit: a crime harm index analysis

Combining data from Tables 2 and 3, we computed the crime prevented in relation to soft policing activities. With 21 extra minutes of PCSO time per day in each of the 34 treatment hot spots (714 minutes per day, × 5 days per week = 3,570 minutes per week, × 52 weeks), we see a total additional resource commitment of 185,640 minutes of PCSO time (3,094 hours) across all treatment hot spots. That is roughly equivalent to the cost of two full-time PCSOs, presently no more than £50,000 (Boyd et al. 2011). On that basis, we estimate that 68 fewer crimes and 399 CFS were prevented. In terms of PCSO patrol visits, the resource difference was 2 visits per day × 34 hot spots = 68 visits, × 260 days = 17,680 visits, or one crime prevented for every 260 visits, and one CFS prevented for every 44 visits.
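The resource arithmetic in this paragraph can be verified in a few lines; the sketch below simply recomputes the minutes, visits and prevented-incidents-per-visit figures from the quantities quoted in the text.

```python
# Figures quoted in the text
extra_minutes_per_day = 21          # extra PCSO minutes per treatment hot spot per day
extra_visits_per_day = 2            # extra PCSO visits per treatment hot spot per day
hot_spots = 34                      # treatment group hot spots
patrol_days = 5 * 52                # 5 patrol days per week over 52 weeks = 260 days
crimes_prevented, cfs_prevented = 68, 399

total_minutes = extra_minutes_per_day * hot_spots * patrol_days
total_visits = extra_visits_per_day * hot_spots * patrol_days

print(total_minutes, total_minutes / 60)    # 185,640 minutes, i.e., 3,094 hours
print(total_visits)                         # 17,680 extra visits
print(total_visits / crimes_prevented)      # about one crime prevented per 260 visits
print(total_visits / cfs_prevented)         # about one CFS prevented per 44 visits
```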
Yet this dosage-to-benefit analysis is deeply flawed by a patently untrue assumption: that all crimes are created equal. The crimes prevented in this experiment ranged from theft to grievous bodily harm (GBH), which is close to attempted murder. What we lack are precise classifications of each offense, which would allow us to weight them according to the sentences recommended for each offense type by the Sentencing Council for England and Wales. The recommendation for GBH, for example, is 15 days of imprisonment if the crime is committed without intent, but 1460 days if the crime is committed with intent. Because this is the widest range for any crime type, we offer a low and a high estimate for the CHI value of crime prevented, based upon which type of GBH is selected. Table 6 parses the difference-in-difference estimate of crimes prevented by category, and does not compute a total difference. Its purpose is to demonstrate the method used to produce Table 7, which shows the CHI value for each offense type derived from the Sentencing Guidelines for England and Wales. As Table 7 shows, the estimate varies widely depending on whether we assume that the GBH offenses were committed with or without intent. Overall, there was a reduction in CHI in nearly all crime-specific categories (except a minor increase in sex crimes and common assault).Footnote 8

Table 6 Difference-in-difference analysis by type of crime

Table 7 Cambridge crime harm index impact analysis

Assuming that the GBH crimes in Table 7 were not committed with intent, we see that the equivalent of two extra PCSO officers across 34 hot spots prevented about 86 days of potential imprisonment at each of the 34 hot spots in the treatment group, or 2,914 days of imprisonment in total (had every crime had just one offender who was convicted and sentenced at the starting point). That amount is about 8 years of imprisonment; at a minimum cost to the public of £35,000 per year, it is equal to £280,000. Divided by the roughly £50,000 cost of the PCSOs, this makes the return on investment of PCSOs in hot spots as busy as Peterborough's approximately £5.60 saved for every £1 invested in PCSOs. If we assume that every GBH prevented was committed with intent, the cost-benefit ratio is far higher. As Table 7 shows, with that assumption, the total days of imprisonment prevented per hot spot over one year would be 360, times 34 hot spots = 12,240 days, or 34 years of imprisonment (costing £1.17 million) with an investment of only £50,000: a return on investment of over £23 saved for every £1 invested in PCSO patrol. By casting the results of this experiment in much more concrete terms for public understanding, the CHI may offer a far more powerful way to present these findings.
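The return-on-investment arithmetic can likewise be reproduced from the figures above; the sketch below uses the imprisonment-day totals and the unit costs quoted in the text (£35,000 per prison-year and roughly £50,000 for the extra PCSO resource).

```python
# CHI-based return-on-investment arithmetic, using the figures in the text
prison_year_cost = 35_000   # minimum annual cost of imprisonment (GBP)
pcso_cost = 50_000          # approximate cost of the extra PCSO time (GBP)

for label, days_prevented in [("GBH without intent", 2_914),
                              ("GBH with intent", 12_240)]:
    years = days_prevented / 365
    saving = years * prison_year_cost
    print(f"{label}: ~{years:.0f} years, ~GBP {saving:,.0f}, "
          f"~{saving / pcso_cost:.1f} saved per 1 invested")
# Roughly 8 years (~GBP 280,000, ~5.6:1) and 34 years (~GBP 1.17m, ~23:1),
# matching the low and high estimates above.
```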
Our experiment with one of England and Wales' largest forces attempted to "cool down" 34 of the hottest 72 crime and disorder hot spots in an ancient, Norman cathedral city. The treatment and control groups did not differ in the background levels of Police Constables' presence, largely comprised of responses to citizen-generated calls for service. The only difference between the treatment and control groups was more total time and more separate visits to the treatment hot spots by "soft policing" civilian staff who hold no special powers of arrest, carry no weapons or handcuffs, and patrol on foot by themselves. These PCSOs visited the treatment hot spots 4.65 times per day, for an average of 37 minutes a day (or 8 minutes per visit), over 12 months. We tracked these visits for their precise dosage by monitoring the GPS transponders in each PCSO's (and each PC's) body-worn radio. All hot spots were geo-fenced, and the number of visits and the number of minutes spent in the hot spots, by all officers on the force, were recorded using personally issued GPS trackers, allowing us to measure the precise dosage of the intervention. We compared these patrols to the lower levels of both independent variables in the control conditions. The police force was not made aware of the location of the control hot spots, in order to avoid contamination. We measured the outcomes of the treatment differences by counting the number of emergency incidents and the number of victim-generated crimes before and after assigning the 72 hot spots to experimental and control groups. Finally, we tested for a possible displacement effect, addressing the continuing concern many have about hot spots policing despite a large body of literature to the contrary.

Police Community Support Officers were not authorized by Parliament with a specific purpose of "fighting crime", let alone to cool down hot spots or target-harden vulnerable or criminogenic places. Their training, equipment and legal powers are ostensibly about reassurance and "soft" community policing, not "crime-fighting." Nor can they confront crime in the classic police-offender encounter: with the use of force as necessary, the threat of immediate use of force, or both. Nevertheless, the results of this experiment suggest that more time on patrol and more visits to hot spots by PCSOs can in fact prevent crime and disorder. Particularly when controlling for the effect of Police Constables' presence in the hot spots, the foot patrols by lone PCSOs caused clear reductions in the number of incidents and the number of crimes, compared to control conditions, even after accounting for minor displacement of crimes to the vicinity around the target areas, as shown in the WDQ model. The effect of this "soft policing" role, as measured by the standardized mean difference between treatment and control groups, is very similar to the average effects of "hard policing" increases in hot spots, as estimated in the systematic reviews by Braga et al. (2012) and Braga and Clarke (2014). The results further suggest that PCSO "saturation" in the treatment compared to control hot spots, even of these "soft" officers, was actually somewhat marginal, adding only 37 minutes per day per hot spot to the 28 minutes per day provided by Police Constables in response policing (Table 2).

What do these findings mean for both deterrence theory and police policy? For theory, they suggest that the probability of encountering an agent of the state is more important than the severity of the summary response an agent can make. For police policy, they suggest that the frequency and duration of visits deserve far more attention and further research. Yet the limitations of our research must lead to caution in drawing either conclusion as a settled answer to questions that may still have different answers in different communities or countries.

Implications for deterrence theory

Our main theoretical interest in this study was the question of whether "soft" policing in hot spots could achieve effect sizes comparable to those of "hard" policing in hot spots. Our finding that it did, at least in this experiment, has major implications for deterrence theory. What is it about the presence of a police uniform that cools down a crime hot spot? If it is the threat of immediate arrest and detention, that is a more severe response than PCSOs are able to invoke. Since they managed to reduce crime without that level of severity, the result suggests that deterrence may be associated more with any engagement by the state, with even weak agents deploying a powerful symbol of the entire police apparatus to which they are connected and which they can quickly mobilize with their radios. While this study cannot pierce the walls of the "black box" of how offenders make decisions to commit crimes, it can observe that the objectively lower level of police powers in this experiment did nothing to reduce the magnitude of crime reduction the "soft" police achieved compared to what other experiments found with hard policing. We did not aim to understand offenders' decision-making processes through surveys or qualitative methods. We cannot measure to what extent people perceived the deterrence message carried by PCSOs as effective or not. Yet the data demonstrate behavioral changes in hot spot populations that took effect once PCSOs became more visible in the hot spots.
In the treatment group compared to control hot spots, offenders did not commit as many crimes, nor were their crimes markedly displaced to other locations in the vicinity of the targeted hot spots. Incidents were prevented by a handful of unpredictable daily visits of "moving insignias" to the hot spots averaging 8 minutes, and lasting not more than 10–15 minutes. Even though a single PCSO cannot exercise force, the PCSO can present a clear message of power: "Beware. I can summon many police officers instantly, and testify against you in court." We therefore interpret the evidence to suggest that the threat of sanctions may not necessarily be about the severity of force each agent can deploy on the spot, but rather the agent's symbolic demonstration of the power of the police organization (Butterworth 1999; Deheane 1997; Pierce 1885). The extra presence in the hot spots of official power-holders, even though they were not weapon-holders, was causally related to crime prevention. Another strong conceptual link between this policing tactic (i.e., soft power foot patrols) and deterrence theory is the repetition and intensity of the signal. There may well be a critical aspect of deterrence provided by foot patrol which is missed in vehicle-based patrols: direct and proximate co-presence of power-holders and members of the public (see Collins 2014 on the emotional impact of co-presence in Goffmanian interaction rituals, including restorative justice conferences led by police). When a police vehicle quickly drives by citizens on the street without even looking at them, public–police encounters are necessarily swift and lacking emotional impact. When a PCSO walks alone past people at a distance of under five meters, the likelihood of eye contact and perhaps even "hello" is probably far greater than if the same officer was in a car. Thus, if Police Constables are "'too busy' to walk the streets and prevent crime" (Marsden 2013) and are replaced by PCSOs, offenders may limit their criminal activities more than if hot spots are patrolled by invisible people inside cars. The repetitive and stable visits to the hot spots sent a clear message: "We see you often, and your face is familiar to us. We are coming back and will know you to be a law-obeyer, so please remain so!" There is, however, a strong assumption in this theoretical analysis that deterrence depends on people's awareness of the increased number of police visits to the hot spots with increased minutes of patrol. We have no evidence to offer on that causal link. It is one possible component of a "black box" of the causal mechanism that our input–outcome analysis suggests is true. But why the PCSO patrols reduced crime cannot be revealed by our methods. We can say that the PCSOs caused the crime reductions, but we cannot describe the micro-mediation of the causal mechanism. Implications for policing policy One implication of these results addresses an intense debate in contemporary Britain. The PCSOs conducted single-person foot patrols, which had been a classic role for British "Bobbies" since 1829, but is no longer the case in most forces in England and Wales. Contemporary policing is characterized by a "reactive, fire-brigade" style of policing in automobiles (Wakefield 2006, p. 16). As one official report suggested, "bobbies on the beat are disappearing from swathes of the country and being replaced by community support officers", primarily due to budgetary constraints (Her Majesty's Inspectorate of Constabulary 2010). 
Instead, it is the PCSOs who walk the beats, while the predominant role of Police Constables is to respond to calls for service and attend non-emergency jobs by car (Wain and Ariel 2014). Thus, PCSOs are now the primary power-agents on foot in most forces in England and Wales. Read this way, our study joins Ratcliffe et al.'s (2011) Philadelphia foot patrol study. Our evidence strengthens support for the conclusion that this historic method of policing is still effective, at least in cities with substantial pedestrian populations on public sidewalks and plazas. At first glance, this study may appear to be just one more experiment in hot spots policing with unsurprising results, given the accumulated evidence (e.g., Braga et al. 2012). Yet, we must recall that few if any recent lines of research in criminology have led to as much change in public policy as place-based initiatives (Weisburd et al. 2012, but cf. Weisburd 2015). Few policies developed through criminological research and development have even been proclaimed "successful" (Skogan and Frydl 2004), as hot spots policing has been. As we noted at the outset, "putting cops on the dots" is steadfastly turning into part of modern policing's DNA (Telep and Weisburd 2014). Yet, much like other matured areas of research, the more we know, the more granular and reliable our recommendations for policy can be. We envisage that our findings have implications for greater precision and reliability on at least one broad domain of hot spots policing strategy: the allocation of patrol dosage. The question of dosage has fed the policing literature for some time now. How many officers does the force need (Wilson and Weiss 2012)? What is the optimal length of time for each visit of police to a hot spot (Koper 1995)? What is the right staffing model (Bittner 1990)? How do officers spend their time in the hot spots, or indeed any communities (Parks et al. 1999; see also Webster 1970)? These are clearly important questions, particularly in an era of austerity when more than 80 % of police budget is allocated to salaries. However, as Sherman (2013) argues, the degree of sophistication of the available evidence has been quite limited and we are left with rather poor studies on the tracking of policing. As this experiment demonstrates, that situation is now changing with the advent of officer-specific GPS tracking data. These GPS data allow us to make two novel observations from this experiment. One is a phenomenon we call "Reiss's Reward," revealing a crucial difference between proactive and reactive patrolling. The other is a possible falsification of the hypothesis that the more time police spend in a hot spot, the less crime there will be. Reiss's reward Our study allows us to speculate beyond the Koper Curve (Koper 1995), which did not distinguish between proactively and reactively generated visits. The GPS data reveal two distinct patterns of the link between patrol dosage and crime, implying we can hypothesize that crime reductions are driven more by the number of discrete patrol visits than by the average number of minutes of patrol they deliver (Table 5). That is just what we observe when the visits are proactively generated at random times. Yet, as Fig. 1 shows, we observe exactly the opposite phenomenon in the control group condition, where the patrol visits are reactively generated after someone has committed—and perhaps gotten away with—a crime. 
While the proactive visits may often surprise (and chasten) those considering the commission of a crime, the reactive visits may advertise the success of the last person who got away with a crime, thus encouraging more crime. As further shown in Fig. 1, the log-linear trend lines suggest an increasing gap as the number of patrols within the hot spots increases: a decrease in the number of events in the treatment conditions and an increase in the number of events in the control conditions, while controlling for the effect of covariates. The more often PCSOs visit the hot spots, the larger the effect on crime and disorder. We hereby label this gap "Reiss's Reward," in honor of the late criminologist Albert J. Reiss, Jr., to whom the Oxford English Dictionary credits the invention of the word "proactive," and whose seminal work on the difference between proactive and reactive policing (Reiss 1971) stimulated the first hot spots patrol experiment, carried out by two of his former students (Sherman and Weisburd 1995). We suggest that "Reiss's Reward" explains the increase in the number of crimes and calls with increasing numbers of police visits across the 38 hot spots randomly assigned to the control group, even while more proactively generated patrol visits in the experimental condition predicted less crime. The "reward" is that, as Reiss might have said, "If police do proactive work to prevent crime, they will be rewarded with less reactive work to investigate crime." Under business-as-usual response policing conditions (as in the control group), PCSOs (or Police Constables in automobiles) visit hot spots after crimes have already occurred; that would explain why police visits and incident counts would be positively correlated with one another in the control conditions. In the treatment hot spots, however, we observed the opposite relationship. Because we know that in treatment group hot spots PCSO visits were proactively assigned (while in control group hot spots they were not), we can infer that the proactive PCSO visits prevented the events from happening in the first place. Thus, in treatment conditions, more police patrol visits cause less crime. The treatment group trend line represents prevention, while the control group trend line represents reactive policing.

Total patrol time

Unlike the pattern with frequency of visits, the duration of all patrol time at each hot spot (Fig. 2) seems to have little relationship with crime at treatment hot spots, while more time spent in control hot spots was still associated with more incidents. Thus, the "Reiss's Reward" logic we posited about the relationship between frequency of visits and crime in treatment hot spots cannot be found for duration of patrol time. In the control hot spots, however, more incidents are associated with more time spent in the hot spots, as well as with more frequent visits. Crucially, however, at virtually every point on the x-axis there seem to be clear differences between the treatment and control groups. Collectively, these two graphs do suggest that we need to focus our attention not just on duration but also, and perhaps more so, on the number of times officers go to the hot spots. In our view, pursuing this question is a major policy implication to explore more robustly in future research. The kind of research design needed to definitively compare frequency versus duration of hot spots patrol visits is, as ever, a large randomized controlled trial.
A small trial of this comparison by Police Constables and PCSOs on foot patrol was recently conducted in central Birmingham, England (Williams 2016), and another in the Police Service of Northern Ireland (Goddard and Ariel 2014), but larger trials including motor vehicle patrols are also needed. No other direction for improved precision in preventive policing strategy seems more important than, in effect, randomly assigning the different levels of patrol duration identified non-experimentally as the Koper Curve (Koper 1995). The implication from this study is to deploy randomly assigned comparisons of different numbers of visits to two or more groups of hot spots while holding total minutes of patrol constant. Yet even theorizing about such designs brings the discussion back to what is actually achievable in police agencies on the streets.

Implications for future research on hot spots and deterrence

Treatment integrity and hot spots' shapes

It was difficult for the officers to maintain treatment integrity in the sense of consistent delivery of 15-minute patrols, 3 times per shift, over time. Why was this the case? One major reason was the shape of our hot spots: polygons are difficult to manage. While there are ample statistical reasons for using polygons in the analysis of hot spots, they pose operational difficulties. Patrol officers are often drawn to vulnerable facilities or crime generators (see Bowers 2014), and are likely to "spread the patrol thinly" within the polygon. When the hot spots are shaped like polygons, it is also more likely that the patrol will "spill over" the boundaries of the hot spot (see Sorg et al. 2014), since officers are less likely to construct their patrols within what seem to be arbitrary digital lines on the map. In these cases, the expected treatment can fluctuate greatly between hot spots. Future research should revisit the polygon-based approach and consider a street-segment approach instead (see Weisburd et al. 2012).

Treatment delivery

A major lesson we learned from the use of GPS trackers is how much crime reduction can come out of relatively small increases in policing. Given how little time PCSOs spent in the hot spots compared to all other officers on the force, it is quite surprising that any significant treatment effect was detected. Future studies on hot spots policing must now acknowledge that "saturation" or "more policing" means very little without accurate estimation of dosage delivery, in both treatment and control conditions, for the ring-fenced team of officers who take part in the experiment as well as for all the other uniformed officers in the agency. Most hot spots policing experiments thus far were unable to measure separately the interaction between the effect of the proactive patrols of hot spots officers and the effect of all other police units; the absence of suitable measures made those analyses impossible. With better technology, we were able to do so. Yet without body-worn videos (e.g., Ariel et al. 2014), we were unable to provide any qualitative analyses comparing PCSOs to Police Constables in their interactions with the public, something future research should pursue.

Additional study limitations

There are two additional limitations, beyond the concerns raised above, which future research should address. In the first place, we have no measure of the actual delivery of policing tactics. What precisely did the PCSOs do while on patrol? How many stop-and-searches did they conduct? With whom did they engage?
How did they engage? How many arrests did they make? While we lacked funding to pursue these questions, what police do in hot spots is a practical as well as a theoretical concern. A second limitation is our inability to explain how PCSOs affect perceptions of the police. As PCSOs were initially introduced to reduce the "reassurance gap" in British policing, we failed to test how these kinds of "soft" policing approaches affect fear of crime, collective efficacy and satisfaction with the police more broadly. Finally, the question of displacement of crime remains unresolved in public policy debates. Our study found evidence of diffusion of benefits to the vicinity around the hot spots, and this finding joins an increasing number of studies finding the same result (Bowers et al. 2011). Yet we remain skeptical that our measurement of displacement is fully comprehensive. First, our study did not observe potential displacement to areas beyond the immediate vicinity of the hot spots (but cf. Telep et al. 2014). Second, hot spot studies need to focus more thoroughly on the possibility of displacement in terms of modus operandi, along with the spatiotemporal transition of crime, which our data could not address.

Allocating crime hot spots to be patrolled by "soft" policing officers who lack arrest powers and weapons can reduce crime and disorder. Experimental evidence from Peterborough's hottest hot spots over a 12-month period shows that calls for service were reduced by approximately 20 % and victim-generated crimes were reduced by 39 % by PCSO patrols, compared to control conditions. Utilizing GPS tracking of all officers in the city, we held constant the number of patrols provided by Police Constables across the 72 hot spots. Based solely on the extra patrols provided by PCSOs, the study finds that such "soft policing" reduced crime and calls for service without spatial displacement into the immediate vicinity of the hot spots. Since the magnitude of the effect sizes is highly comparable to what previous studies have found from increasing "hard" policing, we conclude that "soft" policing can achieve comparable crime reductions without displaying a threat of immediate use of force.

Notes

1. One could also argue that the uniform itself protects the officer from assaults; when a police officer has a distinguishable uniform, it can help prevent his or her injury or death.
2. http://www.loc.gov/law/help/police-weapons/index.php?loclr=bloglaw
3. The catchment zone radii are relatively small (cf. Weisburd et al. 2006), because street segment lengths in Peterborough are often shorter than 50 meters. Moreover, due to the city size, increasing the buffer zones would reduce the number of eligible hot spots even further, thus jeopardising the statistical power of the test.
4. A member of the research team tested the accuracy of the GPS recording by comparing the readings provided by the GPS trackers with manual recording of time spent in the hot spot.
5. Common assault (the lowest category of assault, including slapping and pushing), grievous bodily harm, harassment, etc.
6. Calculated by first subtracting the treatment value from the control value, and then dividing the change by the control, multiplied by 100 to convert into percentages.
7. We were not granted access to measure the presence of officers in the catchment areas via GPS data.
8. While these two offense types did have very small increases, the purpose of CHI is to combine increases and decreases in a rational fashion to compute a bottom line. That is why, no matter what we assume about GBH, there is still a clear reduction in CHI for the experimental group.
9. The sum of the days of potential imprisonment prevented across all of the crime types produces our CHI prevention estimates.

References

Andenæs, J. (1974). Punishment and deterrence. Ann Arbor: University of Michigan Press. Apel, R. (2013). Sanctions, perceptions, and crime: implications for criminal deterrence. Journal of Quantitative Criminology, 29, 67–101. doi:10.1007/s10940-012-9170-1. Apel, R., & Nagin, D. S. (2014). Deterrence. In Encyclopedia of criminology and criminal justice (pp. 998–1005). New York: Routledge. Ariel, B. (2012). Deterrence and moral persuasion effects on corporate tax compliance: findings from a randomized controlled trial. Criminology, 50(1), 27–69. doi:10.1111/j.1745-9125.2011.00256.x. Ariel, B. (2014). A Direct Test of "Local Deterrence" and "Deterrence Radiation": The London Bus Experiment. Presented at the University of London International Crime Science Conference (British Library, July 17 2014). Ariel, B., Farrar, W. A., & Sutherland, A. (2014). The effect of police body-worn cameras on use of force and citizens' complaints against the police: a randomized controlled trial. Journal of Quantitative Criminology. doi:10.1007/s10940-014-9236-3. Berk, R., & MacDonald, J. (2010). Policing the homeless: an evaluation of efforts to reduce homeless-related crime. Criminology & Public Policy, 9(4), 813–840. doi:10.1111/j.1745-9133.2010.00673.x. Bittner, E. (1990). Some reflections on staffing problem-oriented policing. American Journal of Police, 9, 189–196. Bland, M., & Ariel, B. (2015). Targeting escalation in reported domestic abuse: evidence from 36,000 callouts. International Criminal Justice Review, 25(1), 30–53. Bottoms, A. E., & Tankebe, J. (2012). Criminology: beyond procedural justice: a dialogic approach to legitimacy in criminal justice. Journal of Criminal Law & Criminology, 102(1), 119–170. http://www.jstor.org/stable/pdf/23145787.pdf?_=1461676836427 Bouchard, M., & Tremblay, P. (2005). Risks of arrest across drug markets: a capture-recapture analysis of "hidden" dealer and user populations. Journal of Drug Issues, 35(4), 733–754. Bowers, K. J. (2014). Risky facilities: crime radiators or crime absorbers? A comparison of internal and external levels of theft. Journal of Quantitative Criminology, 30, 389–414. doi:10.1007/s10940-013-9208-z. Bowers, K. J., & Johnson, S. D. (2003). Measuring the geographical displacement of crime. Journal of Quantitative Criminology, 19(3), 275–301. Bowers, K. J., Johnson, S. D., Guerette, R. T., Summers, L., & Poynton, S. (2011). Spatial displacement and diffusion of benefits among geographically focused policing initiatives: a meta-analytical review. Journal of Experimental Criminology, 7(4), 347–374. doi:10.1007/s11292-011-9134-8. Boyd, E., Geoghegan, R. & Gibbs, B. (2011). Cost of the cops: Manpower and deployment in policing. London. Available at: www.policyexchange.org.uk. Braga, A. A., & Bond, B. J. (2008). Policing crime and disorder hot spots: a randomized controlled trial. Criminology, 46(3), 577–607. Braga, A. A., & Clarke, R. V. (2014). Explaining high-risk concentrations of crime in the city: social disorganization, crime opportunities, and important next steps. Journal of Research in Crime and Delinquency, 51, 480–498. doi:10.1177/0022427814521217. Braga, A. A., Weisburd, D. L., Waring, E. J., Mazerolle, L., Spelman, W., & Gajewski, F. (1999).
Problem-oriented policing in violent crime places: a randomized controlled experiment*. Criminology, 37(3), 541–580. doi:10.1111/j.1745-9125.1999.tb00496.x. Braga, A., Flynn, E., Kelling, G., & Cole, C. (2011). Moving the Work of Criminal Investigators Towards Crime Control. New Perspectives in Policing, 1–37. Retrieved from https://www.ncjrs.gov/App/Publications/abstract.aspx?ID=255091 Braga, A. A., Papachristos, A. V., & Hureau, D. M. (2012). Hot spots policing effects on crime. Campbell Systematic Reviews, 8, 1–96. doi:10.4073/csr.2012.8. Brantingham, P. J., & Brantingham, P. L. (1993a). Environment, routine and situation: toward a pattern theory of crime. Advances in Criminological Theory, 5, 259–294. Brantingham, P. L., & Brantingham, P. J. (1993b). Nodes, paths and edges: considerations on the complexity of crime and the physical environment. Journal of Environmental Psychology, 13, 3–28. Burke, R. H. (2004). Hard cop, soft cop: Dilemmas and debates in contemporary policing. In R. H. Burke (Ed.), Hard cop, soft cop: Dilemmas and debates in contemporary policing (p. 310). Cullompton: Willan. Bushway, S., & Reuter, P. (2008). Economists' contribution to the study of crime and the criminal justice system. Crime and Justice: A Review of Research, 37(1), 389–451. doi:10.1086/524283. Butterworth, B. (1999). The mathematical brain. London: Macmillan. Caeti, T.J. (1999). Houston's targeted beat program: A quasi-experimental test of police patrol strategies. Huntsville: Sam Houston State University. Chamlin, M. B. (1991). A longitudinal analysis of the arrest-crime relationship: a further examination of the tipping effect. Justice Quarterly, 8(2), 187–199. Clarke, R. V., & Felson, M. (1993). Introduction: criminology, routine activity,and rational choice.In R. V. Clarke, & M. Felson. (Eds.), RoutineActivity and Rational Choice. (Advances in Criminological Theory,vol. 5.). New Brunswick: Transaction Press. Clarke, R. V., & Weisburd, D. L. (1994). Diffusion of crime control benefits: Observations on the reverse of displacement. In R. V. Clarke (Ed.), Crime prevention studies (Vol. 2, pp. 165–183). Monsey: Criminal Justice Press. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale: Lawrence Erlbaum. Cohen, L. E., & Felson, M. (1979). Social change and crime rate trends: a routine activity approach. American Sociological Review, 44, 588. doi:10.2307/2094589. Collins, R. (2004). Inteaction ritual chains. Princeton: Princeton University Press. Cornish, D. B., & Clarke, R. V. (1986). Introduction. In D. B. Cornish & R. V. Clarke (Eds.), The reasoning criminal: Rational choice perspectives on offending (pp. 1–16). New York: Springer. Cullen, F. T., Wright, J. P., Blevins, K. R., Daigle, L., & Madensen, T. D. (2008). The empirical status of deterrence theory: A meta-analysis. In F. T. Cullen, J. P. Wright, & K. R. Blevins (Eds.), Taking stock: The status of criminological theory advances in criminological theory (Volume 15) (pp. 367–395). New Brunswick: Transaction. De Camargo, C. (2012). The police uniform: power, authority and culture. Internet Journal of Criminology, 1, 1–58. doi:10.1358/pojo.2009.82.3.485. Deangelo, G., & Hansen, B. (2014). Life and death in the fast lane: police enforcement and traffic fatalities. American Economic Journal: Economic Policy, 6, 231–257. Retrieved from http://www.aeaweb.org/articles.php?f=s&doi=10.1257/pol.6.2.231. Deheane, S. (1997). The number sense: How the mind creates mathematics. Oxford: Oxford University Press. 
Devos, K. (2014). Factors influencing individual taxpayer compliance behaviour (pp. 67–98). Dordrecht: Springer. Di Tella, R., & Schargrodsky, E. (2004). Do police reduce crime? Estimates using the allocation of police forces after a terrorist attack. The American Economic Review, 94. Durlauf, S.N., Nagin, D.S. (2011). Imprisonment and crime: Can both be reduced? Criminology & Public Policy, 10(1), 13–54. doi:10.1111/j.1745-9133.2010.00680.x. Eck, J.E., Chainey, S., Cameron, J.G., Leitner, M., & Wilson, R.E. (2005). Mapping crime: Understanding hot spots (p. 79). Washington, DC: National Institute of Justice. Erickson, M. L., Gibbs, J. P., & Jensen, G. F. (1977). The deterrence doctrine and the perceived certainty of legal punishments. American Sociological Review, 42(2), 305–317. Form, W. H., & Stone, G. P. (1955). The social significance of clothing in occupational life. East Lansing: Michigan State College, Agricultural Experiment Station. Goddard, N. & Ariel, B. (2014). "How much time should officers spend in nighttime economy hotspots? Lessons from a "Randomized Controlled Trial in Northern Ireland." Presented at the Annual American Society of Criminology (San Francisco, CA, November 18-20, 2014). Goldstein, H. (1979). Improving policing: a problem-oriented approach. Crime & Delinquency, 25(2), 236–258. Golub, A., Johnson, B. D., & Taylor, A. (2003). Quality-of-life policing: do offenders get the message? Policing: An International Journal of Police Strategies & Management, 26(4), 690–707. doi:10.1016/j.biotechadv.2011.08.021.Secreted. Groff, E. R., Taylor, R. B., Elesh, D. B., McGovern, J., & Johnson, L. (2014). Permeability across a metropolitan area: conceptualizing and operationalizing a macrolevel crime pattern theory. Environment and Planning A, 46, 129–152. doi:10.1068/a45702. Guerette, R.T. (2009). Analyzing crime displacement and diffusion. Problem-oriented guides for police: Problem-solving tools series. Washington, D.C. Retrieved from http://www.popcenter.org/tools/displacement/print/. Hart, T., & Zandbergen, P. (2014). Kernel density estimation and hot spot mapping: examining the influence of interpolation method, grid cell size, and bandwidth on crime forecasting. Policing: An International Journal of Police Strategies & Management, 37(2), 305–323. Hauser, M. (2003). Primate cognition. In M. Gallagher & R. J. Nelson (Eds.), Handbook of psychology, biological psychology (Vol. 3) (pp. 561–594). Hoboken: Wiley. Heaton, P. (2010). Understanding the effects of antiprofiling policies. The Journal of Law and Economics, 53(1), 29–64. doi:10.1086/649645. Her Majesty's Inspectorate of Constabulary. (2010). Anti-social Behaviour: Stop the rot (pp. 1–15). London: Her Majesty's Inspectorate of Constabulary Innes, M. (2005). Why "soft" policing is hard: on the curious development of reassurance policing, how it became neighbourhood policing and what this signifies about the politics of police reform. Journal of Community & Applied Social Psychology, 15, 156–169. doi:10.1002/casp.818. Jackson, J., & Kuha, J. (2015). How theory guides measurement: examples from the study of public attitudes toward crime and policing. In T. S. Bynum & B. M. Huebner (Eds.), Handbook on measurement issues in criminology and criminal justice (pp. 1–34). Hoboken: Wiley Johnson, R. R. (2001). The psychological influence of the police uniform. FBI Law Enforcement Bulletin, 70(3), 27–32. Retrieved from http://www2.fbi.gov/publications/leb/2001/mar01leb.pdf. Johnston, L. (2006). Diversifying police recruitment? 
The deployment of police community service officers in London. Howard Journal of Criminal Justice, 45(4), 388–402. doi:10.1111/j.1468-2311.2006.00430.x. Kleck, G. (2014). Deterrence: Actual versus perceived risk of punishment. In G. J. N. Bruisma & D. L. Weisburd (Eds.), Encyclopedia of criminology and criminal justice. New York: Springer. Kleck, G., & Barnes, J. C. (2010). Do more police lead to more crime deterrence? Crime & Delinquency, 20(10), 1–23. doi:10.1177/0011128710382263. Koper, C. S. (1995). Just enough police presence: reducing crime and disorderly behavior by optimizing patrol time in crime hot spots. Justice Quarterly, 12(4), 649–672. Koper, C. S. (2014). Assessing the practice of hot spots policing: survey results from a national convenience sample of local police agencies. Journal of Contemporary Criminal Justice, 30(2), 123–146. doi:10.1177/1043986214525079. Kruglanski, A. W., & Webster, D. M. (1996). Motivated closing of the mind: "seizing" and "freezing.". Psychological Review, 103(2), 263–283. doi:10.1037/0033-295X.103.2.263. Lochner, L. (2003). Individual perceptions of the criminal justice system. Cambridge: Harvard University Press. Logan, C. H. (1972). General deterrent effects of imprisonment. Social Forces, 51(1), 64–73. Loughran, T. A., Paternoster, R., Piquero, A. R., & Pogarsky, G. (2011). On ambiguity in perceptions of risk:implications for criminal decision making and deterrence*. Criminology, 49(4), 1029–1061. doi:10.1111/j.1745-9125.2011.00251.x. Loughran, T. A., Piquero, A. R., Fagan, J., & Mulvey, E. P. (2012a). Differential deterrence: studying heterogeneity and changes in perceptual deterrence among serious youthful offenders. Crime & Delinquency, 20(10), 1–25. doi:10.1177/0011128709345971. Loughran, T. A., Pogarsky, G., Piquero, A. R., & Paternoster, R. (2012b). Re-examining the functional form of the certainty effect in deterrence theory. Justice Quarterly, 29(February 2015), 712–741. doi:10.1080/07418825.2011.583931. Marsden, S. (2013, July 18). Police "too busy" to walk the streets and prevent crime. London The Telegraph. Matsueda, R. L. (2013). Rational choice research in criminology: A multi-level framework. In R. Wittek, T. Snijders, & V. Nee (Eds.), The handbook of rational choice social research. Stanford: Stanford University Press. McCarthy, B. (2002). New economics of sociological criminology. Annual Review of Sociology, 28(2002), 417–442. doi:10.1146/annurev.soc.28.110601.140752. McCulloch, C. E., Searle, S. R., & Neuhaus, J. M. (2008). Generalized linear mixed models. Hoboken: Wiley. McGarrell, E.F., Chermak, S., & Weiss, A. (2002). Reducing Gun Violence: Evaluation of the Indianapolis Police Department's Directed Patrol Project. Washington, DC: U.S. Department of Justice Nagin, D. S. (1998). Criminal deterrence research at the outset of the twenty-first century. Crime and Justice, 23, 1–42. Nagin, D. S. (2013a). Deterrence in the twenty-first century. Crime and Justice, 42(1), 199–263. Nagin, D. S. (2013b). Deterrence: a review of the evidence by a criminologist for economists. Annual Review of Economics, 5(1), 83–105. Nagin, D. S., Solow, R. M., & Lum, C. (2015). Deterrence, criminal opportunities and police. Criminology, 53, 74–100. Nickels, E. (2008). Good guys wear black: uniform color and citizen impressions of police. Policing: An International Journal of Police Strategies & Management, 30(1), 77–92. Parks, R. B., Matrofski, S. D., Dejong, C., & Gray, M. K. (1999).
After deleting the multiples of $2$ and multiples of $3$ from the list of integers from $1$ to $N$, why are a fifth of the numbers still multiples of $5$?

I was reading an explanation about there being infinitely many primes that started off like this: Say to the contrary there are finitely many and $p$ is the largest prime. Then let $N$ be the product of all the primes, so $N=2\times3\times5\times7\times\ldots\times p$. Of the numbers in the list $1,2,3,4,5,\ldots,N-2,N-1,N$, half of them are divisible by $2$. We cross those numbers off the list, and we have $1,3,5,7,9$ and so on. Then of those numbers in this list, a third of them are multiples of $3$.

At first I thought the spacing of the numbers would make it so that not every 3 consecutive numbers in the list would have exactly 1 multiple of 3, but I reasoned that every 3 consecutive odd numbers $2n+1,2n+3,2n+5$ must have a multiple of 3 because $2n+1$ is either $\equiv0,1$ or $2\pmod3$. Okay, then from this list of only odd numbers, we delete all the multiples of $3$, which I now believe is a third of the numbers. Then the book claims that of this new list (with all multiples of $2$ and all multiples of $3$ crossed out), exactly a fifth of them are multiples of $5$.

Now I am stuck as to why exactly a fifth of these numbers are multiples of 5. I understand that a fifth of the numbers from the original list $1,2,3,\ldots, N$ are multiples of $5$, but it seemed to me that the uneven spacing of this list, with the multiples of $2$ and $3$ deleted, might make it so that we aren't guaranteed a multiple of $5$ every five consecutive numbers anymore. How do we know a fifth of the numbers in the new list are multiples of $5$? (The explanation goes on to do this with all the primes until $p$.)

Tags: combinatorics, elementary-number-theory, prime-numbers (asked by anonanon444)

See en.wikipedia.org/wiki/Chinese_remainder_theorem. – joriki Jul 11 '18 at 10:08
See also en.wikipedia.org/wiki/Euler%27s_totient_function. – joriki Jul 11 '18 at 11:39

Think of the pattern of numbers modulo $30$ (we choose $30$ because $30=2\times3\times5$). After removing multiples of $2$ and $3$ you are left with $30n+1$, $30n+5$, $30n+7$, $30n+11$, $30n+13$, $30n+17$, $30n+19$, $30n+23$, $30n+25$, $30n+29$. Of these $10$ numbers just $2$ are multiples of 5 - the second one $30n+5$ and the ninth one $30n+25$. Since this pattern repeats, one fifth of the remaining numbers are multiples of 5, even though they are not evenly spaced among the remaining numbers. – gandalf61

I understand now, thank you very much. The book claims in general, $\frac{1}{p}$ of the numbers left after we've crossed out $2, 3$, etc. are multiples of $p$. I guess we have to find the largest $k$ such that $p^{k}\le 2\cdot 3\cdots p$. (So in this case it's 2, but I don't know in general how to find that.) Then we divide $k$ by the number of numbers from 0 to $p$ which are left after we've done all the crossing out. (So in this case it's 10, and in general, I only have an idea using inclusion-exclusion.) Then I guess we somehow end up with $\frac{1}{p}$. – anonanon444 Jul 11 '18 at 22:09

Does the book place any requirements on $N$? Let's say $N = 30$.
Then, after crossing out the nontrivial multiples of $2$ and $3$, we have: $$\require{cancel} \begin{array}{cccccccccc} 1 & 2 & 3 & \cancel{4} & 5 & \cancel{6} & 7 & \cancel{8} & \cancel{9} & \cancel{10} \\ 11 & \cancel{12} & 13 & \cancel{14} & \cancel{15} & \cancel{16} & 17 & \cancel{18} & 19 & \cancel{20} \\ \cancel{21} & \cancel{22} & 23 & \cancel{24} & 25 & \cancel{26} & \cancel{27} & \cancel{28} & 29 & \cancel{30} \\ \end{array}$$ Ignore $1$ for the moment. The numbers that are not crossed out at this stage are $2, 3, 5, 7, 11, 13, 17, 19, 23, 25, 29$. So $10$ is already crossed out on account of being a multiple of $2$, as is $20$, while $15$ is crossed out on account of being a multiple of $3$. We have eleven numbers out of thirty not crossed out. Two of them are multiples of $5$, namely $25$ and $5$ itself. Two out of eleven is not exactly one fifth but it is close. We can make that ratio smaller if we take $N$ up to $34$ and cross out $32, 33, 34$. – The Short One
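A quick way to double-check the counting in the two answers above is to enumerate a single period of length $30 = 2\times3\times5$; the short Python snippet below (not part of the original thread) does exactly that:

```python
# One period of the deletion pattern has length 30 = 2 * 3 * 5.
survivors = [r for r in range(30) if r % 2 != 0 and r % 3 != 0]
multiples_of_5 = [r for r in survivors if r % 5 == 0]

print(survivors)                              # [1, 5, 7, 11, 13, 17, 19, 23, 25, 29]
print(multiples_of_5)                         # [5, 25]
print(len(multiples_of_5) / len(survivors))   # 0.2, i.e. exactly one fifth
```

It prints the ten surviving residues, the two of them ($5$ and $25$) that are multiples of $5$, and the ratio $2/10 = 1/5$, matching the mod-$30$ argument.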
Volume 5, Number 2, 2000

Kozlov V. V. Billiards, Invariant Measures, and Equilibrium Thermodynamics
The questions of justification of the Gibbs canonical distribution for systems with elastic impacts are discussed. Special attention is paid to the description of probability measures with densities depending on the system energy.
Citation: Kozlov V. V., Billiards, Invariant Measures, and Equilibrium Thermodynamics, Regular and Chaotic Dynamics, 2000, vol. 5, no. 2, pp. 129-138

Bolotin S. V. Infinite number of homoclinic orbits to hyperbolic invariant tori of hamiltonian systems
A time-periodic Hamiltonian system on a cotangent bundle of a compact manifold with Hamiltonian strictly convex and superlinear in the momentum is studied. A hyperbolic Diophantine nondegenerate invariant torus $N$ is said to be minimal if it is a Peierls set in the sense of the Aubry–Mather theory. We prove that $N$ has an infinite number of homoclinic orbits. For any family of homoclinic orbits the first and the last intersection point with the boundary of a tubular neighborhood $U$ of $N$ define sets in $U$. If there exists a compact family of minimal homoclinics defining contractible sets in $U$, we obtain an infinite number of multibump homoclinic, periodic and chaotic orbits. The proof is based on a combination of variational methods of Mather and a generalization of Shilnikov's lemma.
Citation: Bolotin S. V., Infinite number of homoclinic orbits to hyperbolic invariant tori of hamiltonian systems, Regular and Chaotic Dynamics, 2000, vol. 5, no. 2, pp. 139-156

Pronin A. V., Treschev D. V. Continuous Averaging in Multi-frequency Slow-fast Systems
It is well-known that in real-analytic multi-frequency slow-fast ODE systems the dependence of the right-hand sides on fast angular variables can be reduced to an exponentially small order by a near-identical change of the variables. Realistic constructive estimates for the corresponding exponentially small terms are obtained.
Citation: Pronin A. V., Treschev D. V., Continuous Averaging in Multi-frequency Slow-fast Systems, Regular and Chaotic Dynamics, 2000, vol. 5, no. 2, pp. 157-170

Fedorov Y. N. Integrable Systems, Poisson Pencils, and Hyperelliptic Lax Pairs
In the modern approach to integrable Hamiltonian systems, their representation in the Lax form (the Lax pair or the $L$–$A$ pair) plays a key role. Such a representation also makes it possible to construct and solve multi-dimensional integrable generalizations of various problems of dynamics. The best known examples are the generalizations of Euler's and Clebsch's classical systems in the rigid body dynamics, whose Lax pairs were found by Manakov [10] and Perelomov [12]. These Lax pairs include an additional (spectral) parameter defined on the compactified complex plane or an elliptic curve (Riemann surface of genus one). Until now there were no examples of $L$–$A$ pairs representing physical systems with a spectral parameter running through an algebraic curve of genus more than one (the conditions for the existence of such Lax pairs were studied in [11]). In the given paper we consider a new Lax pair for the multidimensional Manakov system on the Lie algebra $so(m)$ with a spectral parameter defined on a certain unramified covering of a hyperelliptic curve. An analogous $L$–$A$ pair for the Clebsch–Perelomov system on the Lie algebra $e(n)$ can be indicated. In addition, the hyperelliptic Lax pair enables us to obtain the multidimensional generalizations of the classical integrable Steklov–Lyapunov systems in the problem of a rigid body motion in an ideal fluid. The latter is known to be a Hamiltonian system on the algebra $e(3)$. It turns out that these generalized systems are defined not on the algebra $e(n)$, as one might expect, but on a certain product $so(m)+so(m)$. A proof of the integrability of the systems is based on the method proposed in [1].
Citation: Fedorov Y. N., Integrable Systems, Poisson Pencils, and Hyperelliptic Lax Pairs, Regular and Chaotic Dynamics, 2000, vol. 5, no. 2, pp. 171-180

Sevryuk M. B. On the Convergence of Coordinate Transformations in the KAM Procedure
We study the $C^r$-convergence of the compositions $W_n=U_1U_2\cdots U_n$ where mappings $U_k$ tend to the identity transformation in the $C^r$-topology as $k \to\infty$. The cases $r = 0$ and $1 \leqslant r < +\infty$ turn out to be drastically different.
Citation: Sevryuk M. B., On the Convergence of Coordinate Transformations in the KAM Procedure, Regular and Chaotic Dynamics, 2000, vol. 5, no. 2, pp. 181-188

Borisov A. V., Kilin A. A. Stability of Thomson's Configurations of Vortices on a Sphere
In this work stability of polygonal configurations on a plane and sphere is investigated. The conditions of linear stability are obtained. A nonlinear analysis of the problem is made with the help of Birkhoff normalization. Some problems are also formulated.
Citation: Borisov A. V., Kilin A. A., Stability of Thomson's Configurations of Vortices on a Sphere, Regular and Chaotic Dynamics, 2000, vol. 5, no. 2, pp. 189-200

Sadetov S. T. On Algebraic Integrals of the Motion of Point over a Quadric in Quadratic Potential
The motion of a point over an $n$-dimensional nondegenerate quadric in a one-dimensional quadratic potential under the assumption that there exist $n+1$ mutually orthogonal planes of symmetry is considered. It is established that all cases of the existence of an algebraic complete commutative set of integrals are exhausted by classical ones. The question whether the integrability due to Liouville is inherited by invariant symplectic submanifolds is studied. In the algebraic category, for submanifolds of dimension $4$ such integrability is valid.
Citation: Sadetov S. T., On Algebraic Integrals of the Motion of Point over a Quadric in Quadratic Potential, Regular and Chaotic Dynamics, 2000, vol. 5, no. 2, pp. 201-212

Neishtadt A. I. On the Accuracy of Persistence of Adiabatic Invariant in Single-frequency Systems
A modified method of A.A. Slutskin (1963) of analytical extension to the complex time plane of solutions of a single-frequency nonlinear Hamiltonian system with slowly varying parameters is considered. On the basis of this method a proof of the estimate for the accuracy of persistence of adiabatic invariant due to A.A. Slutskin is given for such systems.
Citation: Neishtadt A. I., On the Accuracy of Persistence of Adiabatic Invariant in Single-frequency Systems, Regular and Chaotic Dynamics, 2000, vol. 5, no. 2, pp. 213-218

Karapetyan A. V. On Construction of the Effective Potential in Singular Cases
It is known that the problem of an investigation of invariant sets (in particular stationary motions) of mechanical systems with symmetries can be reduced to the problem of the analysis of the effective potential [1-11]. The effective potential represents the minimum of the total mechanical energy with respect to quasivelocities on fixed levels of Noether's integrals corresponding to symmetries of the system. The effective potential is a function in the configuration space depending on constants of Noether's integrals. This function is defined at points of the configuration space where Noether's integrals are independent and can have singularities at some points where these integrals are dependent.
Citation: Karapetyan A. V., On Construction of the Effective Potential in Singular Cases, Regular and Chaotic Dynamics, 2000, vol. 5, no. 2, pp. 219-224

Ziglin S. L. On the Nonintegrability of a Dynamical System of the General Relativity
The absence of an additional meromorphic first integral of a Hamiltonian system with two degrees of freedom emerging in the description of the Friedman cosmological models with the coupled scalar field is proved.
Citation: Ziglin S. L., On the Nonintegrability of a Dynamical System of the General Relativity, Regular and Chaotic Dynamics, 2000, vol. 5, no. 2, pp. 225-226

Rudnev M., Wiggins S. On a Homoclinic Splitting Problem
We study perturbations of Hamiltonian systems of $n+1$ degrees of freedom $(n \geqslant 2)$ in the real-analytic case, such that in the absence of the perturbation they contain a partially hyperbolic (whiskered) $n$-torus with the Kronecker flow on it with a Diophantine frequency, connected to itself by a homoclinic exact Lagrangian submanifold (separatrix), formed by the coinciding unstable and stable manifolds (whiskers) of the torus. Typically, a perturbation causes the separatrix to split. We study this phenomenon as an application of the version of the KAM theorem, proved in [13]. The theorem yields the representations of global perturbed separatrices as exact Lagrangian submanifolds in the phase space. This approach naturally leads to a geometrically meaningful definition of the splitting distance, as the gradient of a scalar function on a subset of the configuration space, which satisfies a first order linear homogeneous PDE. Once this fact has been established, we adopt a simple analytic argument, developed in [15] in order to put the corresponding vector field into a normal form, convenient for further analysis of the splitting distance. As a consequence, we argue that in the systems, which are Normal forms near simple resonances for the perturbations of integrable systems in the action-angle variables, the splitting is exponentially small.
Citation: Rudnev M., Wiggins S., On a Homoclinic Splitting Problem, Regular and Chaotic Dynamics, 2000, vol. 5, no. 2, pp. 227-242
Standard Notes
An end-to-end encrypted note-taking application for digitalists and professionals.
@StandardNotes standardnotes.com

An update on early pricing and roadmap
November 16, 2021•871 words

This is a trimmed down version of an email that was sent out to most (but mistakenly not all) of our users on November 3, 2021.

The End of Early Pricing

Early Pricing is our 5-year plan, which was marked down at a steep discount as a sort of "capital raise" program---you give us a single relatively large sum in advance, and we give you prolonged service. But because the program was offered at such a discount, it couldn't be sustainable in the long run. We're glad to announce the time has finally come to graduate out of early pricing, and into a more long-term focused, sustainable pricing model. Next week, we'll be launching our new subscription plan lineup, which features different plans that offer different features at different price points. Before we describe those to you, it's important to note that if you already have a pre-existing subscription, these changes or prices do not affect you. Existing subscribers are always taken care of.

What's Changing

Currently, we only have 1 subscription plan offered at different time commitments. However, we often get the feedback that some users want only one particular feature, and don't want to pay for the whole offering. Our new subscription system consists of 3 different plans that each cater to a different use case. If you are an existing subscriber, whether monthly, yearly, or 5-yearly, you automatically get the highest Pro Plan, with no price increase! You're locked into the price you signed up with, and your plan will continue to renew at that price indefinitely until you cancel. You can also extend your plan today with another X years at current pricing.

Our new release next week also features our brand new unified subscription architecture, so that you no longer need two (potentially confusing) accounts to use Standard Notes. And you no longer need to manually activate your paid benefits and install them one-by-one---everything just works seamlessly! It's taken a tremendous amount of architectural and engineering effort to pull this off, and we couldn't be happier to finally be shipping this. With this new architecture, we're now able to finally begin shipping and working on user-facing features which we can rightfully charge for depending on complexity. This takes us to our roadmap. We have a few major things we have our sights on for the very near future:

1. Built-in encrypted files. Our current solution for files uses FileSafe with third-party cloud providers, which isn't the best out-of-box user experience. Our new experience will focus on built-in file handling available on all our applications and synced directly to your existing Standard Notes account. Your storage capacity will depend on your plan. For example, the Plus Plan might allow for 5GB of storage, while the Pro Plan might allow for 25GB storage (exact storage specifications TBD).

2. Native folders for web, desktop, and mobile. The current nestable folders solution requires an extension that isn't available on mobile. Nativizing this feature will allow for a much more pleasant user experience.

3. Multiple account support. The ability to quickly and easily switch between multiple accounts, perhaps if you have one account you use for work and one account for personal notes.

4. Tabbed editor support.
The ability to open and edit more than one note at a time, each in a separate tab, similar to how code editors or web browsers work today.

5. Offline editors on mobile. Currently you need an online connection to use our specialty editors (non-plaintext) on mobile. Offline editors on mobile will download editors to your device, like we do on desktop.

6. Large database load improvements for mobile. Currently when you have many thousands of notes on mobile, startup decryption experience may vary depending on your device specifications. New advancements in React Native have however potentially opened the door for drastic performance improvement opportunities.

7. Continuing to improve our UI/UX. You may have already noticed new design elements in our web and desktop app. This includes a redesigned account menu and a brand new general purpose Preferences pane to accommodate the variety of configuration options we have now and will expand into in the future. The rest of our applications have already been redesigned in our private Figma, just waiting to be implemented in code! We can't wait to continue gradually releasing small doses of this redesign, until it is fully complete.

We have a new website! Check it out at standardnotes.com. We caved and now we have a Discord. We still have Slack too. But that's ok. Each platform encourages a different culture and invites a new audience. Standard Notes is crypto friendly, and the crypto community loves Discord, so we'd love to open our doors to them. You can also follow us on Twitter (@StandardNotes). We don't tweet much because we're always working. But maybe one day we'll hire one of those hip social media people that tweet in lowercase and pick internet fights with Wendy's.

That's all for now. Thanks for being on this journey with us. We look forward to continuing to build out the best, most secure encrypted note-taking application available. It's hard work. But we're doing it. And we love every second of it.

Why TokenVault is going public source
June 8, 2021•255 words

In investing time and resources into improving TokenVault and other editors, we felt uncertain about the fact that there are already open-source clones offering free (but untrusted) distribution of our paid extensions. This is certainly within their rights, as our custom editors are licensed with AGPLv3. For context, Standard Notes clients and sync server have always been released under an open-source license. Extensions have had a different history, as their primary purpose is precisely a way to monetize without impeding on core experience. They started as public-source, later changed to open-source, and today take another shift, but one we think is nuanced, reasonable, and hopefully, fair.

Editors that we develop in-house mostly from scratch will house a public-source, but not open-source presence. This means you can browse the source code online, and even use it for personal use, but you cannot redistribute it for free or for profit. Editors that are derived and are mostly wrappers on top of existing open-source software will retain either the license of the majority share library, or AGPLv3. Protecting our investment in the resources allocated to improving editors also allows us to further re-invest revenue into improving our primary-focus open-source clients and server. Our goal is building the best way to store and manage your personal notes and data.
End-to-end encryption, open-source, and business sustainability are fundamental pillars of our product, and we hope you'll continue to trust us to adjust the levers in ways we deem important to our business, while keeping the scales tipped at-large towards open-source.

Standard Notes 3.6 Update
March 2, 2021•1,033 words

We're excited to launch version 3.6 of our applications on every platform. This release focuses on simplifying access control measures, as well as giving you the power to review and revoke other devices signed into your account.

You'll now have the ability to review which devices are currently signed into your account. You can choose to Revoke an existing session. This will prevent that device from having access to your account. Revoking a session also removes all account data from that device. (Data removal feature requires all devices to be running v3.6+)

Prior to version 3.6, protecting certain actions, like viewing protected notes or downloading a backup, required you to configure complicated settings under the Manage Privileges screen. These actions were not protected by default until you went out of your way to properly enable them. In version 3.6, we're happy to introduce a change that will make protections a much more seamless experience. There are no longer any settings required to make protection work. Instead, the following actions are automatically protected:

• Viewing a protected note
• Downloading an account backup
• Other important actions, such as removing your application passcode or revoking a session

This means that to perform any of the above actions, you'll be asked to enter your application passcode (or biometrics on mobile) first. If an application passcode is not configured, you'll be asked to verify with your account password. (If you are not using Standard Notes with an account, and you do not have a passcode/biometrics configured, then these actions will proceed without verification.)

You'll also have the option of "remembering" a protected session for a period of time, like 5 minutes or 1 week. When you choose for the application to remember, you won't be asked to authenticate protected actions again until the selected time period has elapsed. If you choose to remember for 1 week, but change your mind afterward and want protections to be re-enabled immediately, you can do so from the Account/Settings menu.

Protected Notes

Prior to version 3.6, when you protected a note, we displayed a very prominent indicator on that note in your list of notes.
The reason primarily is for your own peace of mind: it can be somewhat alarming if you protect a note, return to it a week later, not see any special status on it in your list of notes, panic, and think, did I not protect this!? So for now we find the more subtle approach to be the most balanced one. In case you missed it, we also announced the completion of two new major third-party security audits performed by Trail of Bits and Cure53. These extensive audits focused on both our application and server codebases, as well as our detailed encryption specification and protocol. Read more: Standard Notes Completes Penetration Test and Cryptography Audit Version 3.6 completes another round of "foundational" updates we've been eager to ship. These updates focus on features that improve the core experience centered around privacy and security. Our roadmap for the remaining year consists of two major projects: Unifying our systems and architecture so that services such as Extended, our website, and Listed can communicate with each other in a more seamless manner. Currently you may notice that signing up for Extended, our paid subscription service, requires you to enter a separate email on our website (that may or may not be the same email you use to register for a notes account), then import a code into your app that activates your Extended benefits. We'd like for this process to be much simpler, so that there aren't many parts that you have to worry about. Unifying this architecture will have many numerous benefits and solve several long-standing issues with the upgrade experience. But, as you can imagine, it's a really big project. And we're already well underway. Files. This is a very important focus for us this year and beyond. Files are presently somewhat of a second-class citizen in our ecosystem, and requires configuring a few settings and linking an external cloud provider. We'd like to bring the same great user experience and reliability you've come to expect for your notes, to files. Imagine being able to open Standard Notes on your phone and seamlessly record a video or snap a photo that's fully encrypted, and then have that file appear and securely synced to all your other devices instantly? Imagine being able to tag these encrypted files, attach them to notes, and more. We're really excited about files, but, it may be our largest undertaking yet. This wraps up our new releases and roadmap update. We hope you enjoy using our most secure and private experience yet on all your devices. If you'd like to support our work and development—and unlock our full suite of productivity-enhancing features—you can purchase Extended, our paid subscription service. Extended unlocks editors including Secure Spreadsheets, TokenVault Authenticator, and a suite of Markdown and Code editors, as well as other powerful services such as daily email backups, extended note history, and more. As always, please don't hesitate to get in touch if you have any questions. You're also welcome to join our community Slack group and follow us on Twitter for more frequent updates. Standard Notes Completes Penetration Test and Cryptography Audit January 5, 2021•338 words We are pleased to announce the latest release of our encryption suite. This release uses the latest state-of-the-art, cryptographer-recommended algorithms for modern day encryption and key generation, designed to withstand the latest advances in cryptographic attacks and brute-forcing. 
For data encryption, our latest cryptography suite uses the XChaCha20-Poly1305 algorithm. This algorithm is presently the preferred algorithm in many modern-day encryption contexts, and ranks above any of the AES-suite algorithms, like AES-GCM and AES-CBC.

For password-based key derivation, our new release uses Argon2, a memory-hard algorithm. This is in comparison to PBKDF2, the previously and commonly used algorithm that has proven to be vulnerable to recent technological advances in specialized computer hardware, as demonstrated by cryptocurrency mining equipment, which can compute hashes very quickly. Because Argon2 is memory hard, each single guess at a hash requires around 70MB of memory. This makes it very, very expensive to mount a large scale attack and try to guess trillions of hashes. Guessing trillions of hashes using PBKDF2, however, is not nearly as expensive.

The implementation of the latest advances in encryption technology makes Standard Notes more robust, powerful, and secure than ever. These new releases are backed by two new security audits conducted by two of the world's leading cryptography research and testing firms: Cure53 and Trail of Bits.

We engaged with Cure53 to conduct a penetration test of our entire ecosystem, including our cross-platform applications and server. Cure53 conducted a rigorous and thorough test, lasting multiple weeks, that helped ensure confidence in our ecosystem by finding any vulnerabilities in our environment.

We also engaged with Trail of Bits to audit our new encryption release. This entailed auditing our specification, algorithms, and code implementation of the shared library we use in our applications to sync data and perform encryption and key generation.

We are very pleased with the results of both audits, and their impact on making Standard Notes the most secure note-taking application available. You can visit our Audits page to learn more about these, and other, audits.

Standard Notes as a Holiday Gift
December 17, 2020•667 words

A subscription to Standard Notes Extended is a wonderful and thoughtful gift for your friends, family, colleagues, and loved ones. The reasons are simple: you can use it to write about almost anything, anywhere, anytime, and for as long as you want. It's hard to find a gift so versatile, flexible, rugged, and yet affordable. Let's look at these reasons in more detail:

Standard Notes Extended comes with a full suite of Markdown, code, and rich text editors, so you and your loved ones can write about almost anything you want. It's easy to make lists for simple and habitual tasks like buying groceries and working out, and you can store data that doesn't change often, such as secrets for two-factor authentication. Standard Notes has built-in end-to-end encryption, which means that your colleagues can store confidential, work-related information and your friends and family can keep private journals of fun and intimate moments.

Standard Notes is secured with modern encryption, but it's still easy to share your notes when you want to. Extended comes with an Action Bar that makes it easy to copy your note, save it to a file, or email it directly to your friends, family, and colleagues. Standard Notes is integrated with Listed, a simple and popular blogging platform, so it's easy to blog with a custom domain and publish to private links.

Standard Notes works on all major operating systems, so you can take your notes with you wherever you bring your devices.
You can read and add to your notes from your computers at home, work, and school or while traveling on the subway, at an airport, or in a cafe. You can even take your notes with you to the bathtub or pool if your devices allow it. The Folders and Tags system in Extended is intuitive and powerful, so you can organize your notes to best suit your needs. With four dark themes and two light themes, Standard Notes is useful when working late into the night as well as throughout the day. The No Distraction theme makes it easier for you to focus on what matters: your thoughts and your content. The design of Standard Notes is, in one word, rugged. The apps are built for the long haul. They are sleek, slim, and built for longevity. Standard Notes Extended provides automated daily backups to your email, Dropbox, Google Drive, and OneDrive -- providers that you already know and use. The built-in Note History feature lets you revert your notes back to previous versions without a hassle. Notes are stored in plain text, making them easy to export and read without special software. It's a Great Deal There are many reasons to give someone a gift, but the best gifts are usually thoughtful ones. Sometimes, we choose a gift because we have a good idea that it is what the recipient wants. Other times, we choose a gift because we think it will make them happy or improve their life. A subscription to Standard Notes is one of those gifts. Standard Notes is a safe and reliable place to store thoughts, notes, and information. It is simple, useful, and built to last. A 5-year subscription to Standard Notes means that your gift will last for at least five years. Using it over time can provide benefits that last much longer. It's a great deal. Treat yourself and treat your loved ones. Get a subscription to Standard Notes. Give the gift of a private, more secure lifestyle → Our Holiday Sale, 35% off the 5-year plan, ends in a couple weeks. It's $1.61/month or $19.37/year billed at $96.85 every 5 years. Get 35% Off → Join our Slack and follow us on Twitter to get all the latest updates about Standard Notes. What is a pull request? September 8, 2020•667 words One of the main ways software developers contribute to free and open-source projects is by creating pull requests to fix bugs, add features, clarify documentation, and to address other issues. A pull request is a proposal to make specific changes to the source code of a project. Projects usually have multiple versions of their source code, and one of them is the main version. The maintainers of the main version often encourage other developers to contribute to their projects by creating pull requests. How do pull requests work? Pull requests typically have five parts: the issue, changes, discussion, approval, and merge. The first step to creating a pull request is to identify an issue with the existing source code for a project. Pull requests are meant to be reversible, so developers are encouraged to make each pull request focus on one issue or topic. For example, fixing a website's styling and updating its content can and should be separated into two separate pull requests. After identifying the issue, a developer creates a complete copy of the project's source code on their own computer. Since their copy is derived from another copy, their copy is known as a fork. The developer then proceeds to change their copy of the source code to address the issue they identified. When the developer is finished with their changes, they write a summary of their changes. 
The summary may include details about which issue the changes are meant to fix, an explanation for their approach to the issue, and a description of any testing they performed to ensure that the changes worked as intended. Then, the developer requests the maintainers to review and accept their changes. The developer and maintainers discuss any remaining questions about the pull request, such as whether the changes can be optimized or need further improvements. If the maintainers think that the pull request is ready, they can approve it and merge the changes into the main copy of the source code. The developer's pull request is granted and the developers "pull" the changes into the main copy. Why do people create pull requests? Each developer has their own reasons for contributing to free and open-source software. Here are a few common reasons: Prestige. When the maintainer of a project merges a developer's pull request into the source code of a project, the developer is permanently attributed as a contributor to that project. For example, the Standard Notes web app repository has 23 contributors at the time of this writing. Developers can accumulate fame and prestige within the developer community by making significant contributions to important and valuable open-source projects. This can help them build an audience and find more employment opportunities. Experience. Junior developers can gain experience and build their resumes by contributing to open-source projects with pull requests, and experienced developers can use them to practice their skills. This can also help developers find future employment. Generosity. Software developers are problem-solvers at heart and often enjoy sharing solutions for others to use. By sharing the solutions, more people can benefit from them. Contributing to free and open-source projects with pull requests is a way to give back to a community or project. Compatibility. Developers can create new features and fix bugs by modifying their own copy of a project to suit their own needs. However, they can ensure that the new features and bug fixes are compatible with future versions of the project by implementing them into its main source code. Pull requests also allow their feature to receive more critical review and attention. GitHub's full documentation on how to work with pull requests Wikipedia's entry on pull requests This post was originally published on the Standard Notes Knowledge Base. Standard Notes is a free, open-source, and end-to-end encrypted notes app. Encryption is for Everyone June 10, 2020•911 words People with wealth and power have many things that normal people do not. When they are sick, they have access to many of the best doctors and the best medical treatments. When they are well, they can afford to attend the most prestigious private universities and pay for their children to do the same. When they are in trouble, they can buy their way out with the help of big law firms. All the while, they leverage their private social networks to influence giant corporations and government officials to create laws, policies, and products that maintain their wealth and power generation after generation. The lives of normal people are much more difficult. They struggle to pay for their healthcare and education, and they rely on the free legal guidance provided by the government, if any at all. They influence the government with only their spare change, voices, and votes. The rich and powerful thrive while normal people struggle to survive. 
But there is one thing that people with wealth and power do not have better than normal people: encryption. In 2001, the United States National Institute of Standards and Technology (NIST) announced the Advanced Encryption Standard (AES) as a cryptographic algorithm that can be used by the U.S. government to protect sensitive electronic data. Today, AES is still widely used to protect personal data, digital communications, and other important information technology infrastructure. There are many ways to implement AES, and they are named in part after the sizes of their keys. The version that uses 256-bit keys is known as AES-256 and is the strongest version. Many free and open-source software programs such as Standard Notes and Cryptomator make it easy for people to use AES-256 to protect their privacy and personal information. With these programs, encryption can be used by anyone regardless of their sex, gender, race, ethnic group, religion, economic class, political party, criminal record, or national origin. In other words, encryption is a way for normal people to keep information from the economic and political elite. Such information could include facts and personal data that normal people could use to prevent the elite from further suppressing or infringing upon their rights. Encryption is a way for normal people to maintain what little power they have. Furthermore, people with wealth and power cannot buy better encryption. The world's largest computer networks cannot break AES-256 even though the algorithms were invented over two decades ago and there have been great advances in computing technology. The wealthy and powerful may be able to hire mathematicians, cryptographers, and computer scientists to create new algorithms and implement them in proprietary software programs, but no amount of money can give them better encryption. Algorithms need to be tested with time and software needs to be inspected by communities in order to be trustworthy. Practically speaking, the elite cannot create better encryption software than what is already free, fast, easy to use, and impossible to break. The widespread use and availability of a defensive tool as unbreakable as encryption software threatens the technological dominance that the economic and political elite have held for so long. Governments use it for themselves to protect their own secrets, such as those vital to "national security," but many of them try to limit access to encryption technology in order to surveill, censor, and otherwise control their constituents. Since they do not have the technical capacity to break encryption, they have to use social means to prevent its use. They create laws that ban its import and export and punish people who use it. They make software companies liable for how people use their products. Governments usually create these policies under the guise of trying to prevent criminals from doing bad things, but they are also the ones who determine who is a "criminal" and who is not. The policies they create also affect the technologies that normal people have access to, but normal people can use encryption in a variety of ways that are not harmful or morally wrong. As a result, the economic and political elite determine the rules of acceptable behavior for everyone except themselves. Therefore, attempts to limit access to encryption are attempts to further undermine the power of normal people. 
If you believe that normal people should have the power to protect their own private personal information, then you can help us maintain our power by acting on your beliefs. You can exercise your rights to freedom of speech, privacy, and encryption. You can tell your government representatives to reject legislation that would prevent its use. You can use, support, and share encryption technologies with others to spread awareness. Software programs like Standard Notes, Cryptomator, and Bitwarden are designed to protect your personal notes, files, and passwords with AES-256 encryption. They are all free to use and open-source.

The right to use encryption is a fundamental human right as inalienable as the right to think freely in one's own mind. It is a tool that belongs to everyone, not just the economic and political elite. Help us protect our right to keep personal information private and our freedoms to think, speak, and communicate by standing up for encryption.

What is Encryption?
What is End-to-end Encryption?
What is Free and Open-Source Software?
Wikipedia entry on the Advanced Encryption Standard

How to block ads and trackers in Safari for iOS

Ads on the web are annoying and most trackers betray our privacy by giving third parties information about the sites we visit and the topics we are interested in. These third parties can then track us around the internet to sell us more ads, distort our search results, and give our browsing history to governments. When we block ads and trackers, websites are easier to read and faster to load, so we save time and bandwidth (data).

Blocking ads and trackers is easy on desktop browsers thanks to extensions like uBlock Origin and Privacy Badger. These extensions are not available on iOS, but we can still block ads and trackers in Safari by downloading additional apps and enabling them in the Settings. The installation process is the same for each app. After installing the app from the App Store, visit Settings > Safari > Content Blockers and enable them. In the Settings, you may see information about content blockers: Content blockers affect what content is loaded while using Safari. They cannot send any information about what was blocked back to the app.

The following apps seem to be reliable content blockers for Safari:
Firefox Focus: Privacy browser (Mozilla, App Store)
Better Blocker (Ind.ie, App Store)
AdBlock Pro for Safari (Crypto, Inc., App Store)

Firefox Focus is free and open-source and is also available on Android. The source code for the Android and iOS apps is available on GitHub. Better Blocker is also free and open-source. Its source code is available on GitLab. Adblock Pro is not open-source, but it does not require account registration and it does not require any special permissions. Their privacy policy states that the "App does not collect any personal information" and that the "App uses Apple's native Content Blocking API - it only supplies blocking rules to Safari, without having any access to your browsing data." You can use each of these apps on their own or all together to maximize the number of trackers blocked. Better Blocker is the simplest to use, but Adblock Pro provides the most customization.

What is Free and Open-Source Software?
May 1, 2020•711 words

Software programs, like other creative works, are released to their users under certain terms and conditions called licenses. When a license gives its users the rights/freedoms to use, study, copy, modify, improve, and redistribute it, then the software is considered free, or libre, and open-source software (FOSS).
Background: In software development, companies and developers write software as a collection of many files called the source code or the code base. When the software is ready for use, they compile the source code into executable files. For example, applications on Windows and macOS typically have the file extensions .exe and .app, respectively. These executable files are usually unreadable and recovering the source from them is usually impossible. If the developers keep their source code private, then the software is said to be proprietary or closed-source. If the developers publish the source code for the public to study it, but do not grant them all the freedoms of open-source software, then the software is called source-available.

In conventional software development, companies release proprietary software and they require you to purchase a license or subscription in order to use it. This sometimes works well for consumers, but there are important restrictions to be aware of when using proprietary software. If a software program prevents you from exporting your data and using it in another compatible program, then you are forced to maintain a subscription for it in order to maintain access to your work. This tactic, known in economics as vendor lock-in or consumer lock-in, is a way for technology companies to make it difficult for you to stop using their services.

Free and open-source software avoids locking in consumers and instead provides them with several valuable rights:

Users of free and open-source software are permitted to use it for any purpose (except for those prohibited by law).

Users and third parties can independently study and inspect FOSS programs to verify the authenticity of claims regarding their privacy and security. By making the software transparent, it has the potential to be safer and more trustworthy.

After obtaining copies of the source code, users can modify it to fit their needs. These modifications may include improvements on the original code or removals of existing features (e.g., those that invade privacy, create security vulnerabilities, or are simply unnecessary).

Users of FOSS can choose to redistribute their software, modified or not, to other people without fee or for profit. The right to redistribution allows users to share their modifications and improvements with others.

Some FOSS licenses require that any redistribution of the software must also be licensed with the same license as the original software or at least be licensed in a way that does not revoke any of the rights granted by the original license. These licenses are known as copyleft licenses and are meant to guarantee that any modifications of FOSS remain part of the community as FOSS.

Example: The strongest copyleft license for FOSS is considered to be GNU Affero General Public License Version 3.0, or AGPLv3, because it requires that anyone who uses the software to provide a service over a network must also provide its complete source code, even if it's modified. Standard Notes publishes the source code for its web, desktop, and mobile apps as well as its syncing server and extensions under AGPLv3. This means that any individual or company can legally use all our free and open-source software for their own commercial purposes and therefore potentially drive us out of business. However, the AGPLv3 license requires that they must also release their software under AGPLv3, so any improvements that they make to it ultimately return to the Standard Notes community.
This means that if Standard Notes were to disappear for whatever reason, then the community would be able to maintain the service and your notes would continue to be safe.

The Standard Notes Privacy Manifesto and Longevity Statement
Full text of the Affero General Public License Version 3.0
Wikipedia entry on Free and Open Source Software
Philosophy of the GNU Project

What are LaTeX, TeX, and KaTeX?
April 27, 2020•1,984 words

What is LaTeX?

LaTeX is the standard document preparation system for producing high-quality publications in academia and technical industries. It is often used for large and important academic works such as theses, dissertations, and peer-reviewed journal articles and books, but it can be used for anything, from resumes to homework and lecture notes. For example, the security white papers for Signal and ProtonMail are written in LaTeX by security professionals.

How does LaTeX work?

The main idea behind LaTeX is that you can focus on your content because you do not have to think about how it is formatted as you are writing. For example, if you were writing a paper with multiple sections and subsections, you could define where they start and what they are called by typing \section{Section Title} and \subsection{Subsection Title}, respectively. Then, when you are ready to see your stylized document, you press a Render or Print button in your LaTeX editor and it will produce a separate .pdf file for reading, sharing, or publishing. In the .pdf file, you will see that the sections are automatically enumerated and styled according to the default styling or any customized styling that you defined.

When you work with LaTeX, your work is saved into plaintext files with the extension .tex. They are just like .txt and .md files, except your computer operating system can easily associate them with your LaTeX editor. The two main types of files involved with LaTeX -- plaintext and .pdf -- are two of the most easily readable by humans without any sophisticated software, so they are great for storing information for a long time.

The way of editing and overall workflow offered by LaTeX is different from mainstream typesetting programs such as Microsoft Word, Open Office, and Google Docs. These programs are described as What You See is What You Get (WYSIWYG) because you style your content as you work, and the formatting that you see on the screen while you type is what you expect to see when you print it out or convert it to a .pdf. These editors usually save your work in special extensions such as .docx, .odt, or .rtf in order to preserve the formatting. These files require special software to read, so they force you to remain an active user or subscriber of that software in order for you to maintain access to your work. When you work with LaTeX, you write in plain symbols and produce a .pdf of stylized symbols, but with mainstream software, you write in stylized symbols and produce a .pdf that you expect to look exactly the same.

What is TeX?

LaTeX is used by academics in many languages and in almost every field because it incorporates TeX, a sophisticated digital typesetting system. TeX was initially released by computer scientist Donald Knuth in 1978, six years before LaTeX was first released, and has long been the standard typesetting system for academic publishing in technical fields. In the documents produced with LaTeX, the shapes of the letters and symbols and the way they are spaced apart are collectively known as TeX.
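As a concrete illustration of the workflow described above (sectioning commands, math between dollar signs, and compiling to a .pdf), here is a minimal, hypothetical .tex file; it is not taken from the original post, just a sketch of what such a document could look like:

```latex
\documentclass{article}

\begin{document}

\section{Section Title}
Body text for the first section.

\subsection{Subsection Title}
Inline math such as $\frac{\sqrt{2}}{2}$ can be mixed with prose,
and displayed math can be set on its own line:
\[
  e^{i\pi} + 1 = 0
\]

\end{document}
```

Running this file through a LaTeX compiler such as pdflatex would produce a .pdf with a numbered section and subsection and the math typeset in TeX's familiar style.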
There are plenty of extensions to TeX, so it can be used to write about any subject, but it is probably noticed the most in mathematics, computer science, and the sciences because it was originally designed to typeset complex mathematical formulas. In a LaTeX program, you can type math by enclosing it in dollar signs. For example, if you type $$\frac{\sqrt{2}}{2}$$, you would get a fraction of √2 over 2. Try it for yourself on the free online Upmath editor. Some LaTeX programs allow you to use one pair of $ signs instead of two, or to use one for in-line math and two for large blocks of math.
What is KaTeX? KaTeX is a fast, self-contained JavaScript library supported by Khan Academy that makes it easy to render TeX on mobile, desktop, and web applications without the full LaTeX infrastructure. It has much of the same functionality as LaTeX but does not have all the same features and add-ons. KaTeX is built for situations where sharing snippets of TeX is enough and sending entire .tex or .pdf files is excessive or inconvenient. For example, KaTeX is built into Facebook Messenger and Rocket.Chat for you to send bits of math in your chat messages. A full table of TeX symbols supported by KaTeX is available here. Since KaTeX is made for the web, it is commonly used in conjunction with Markdown, another way to indicate basic formatting while you are writing. As with LaTeX, Markdown is not WYSIWYG, and contents are stored in plaintext .md or .txt files. To make bold text in a WYSIWYG editor, you highlight the text and click a button. In LaTeX, you type \textbf{bold text}, and in Markdown, you type **bold text**. Keyboard shortcuts are generally available for each form of editing.
Why Learn TeX? Many undergraduates and graduate students in STEM fields are required to learn how to use LaTeX because it is expected in graduate and professional schools, but making TeX more accessible over the web via KaTeX and having students learn TeX as early as high school would make mathematics a bigger part of our internet language, and that would have many positive consequences for each of us. Our understanding of abstract concepts depends on our ability to describe them with language. For instance, we feel a wider range of complex moods and emotions when we have a language to distinguish between individual feelings and to identify how much we experience them. So, when we talk with other people and write to ourselves about how we feel, we can better describe what we feel, how often we feel that way, and for what reasons. Similarly, if we incorporate KaTeX into the online platforms that we use to communicate, we can integrate mathematics into our written language and thereby improve our understanding of mathematical objects. Ideas that were once strange and obscure will feel more natural and human as we talk more about them. Math will then seem less separate from ourselves and less of a formal activity done only in classrooms for grades. This may also help reduce mathematical anxiety, a common phenomenon among students today. Adding support for mathematics to our online infrastructure is especially important today. Students are becoming increasingly reliant on receiving their education over the internet, and a lack of familiarity with TeX forces them to scan documents, take pictures, or draw their math. These methods either do not work well for collaboration or require expensive, specialized hardware, so they are inefficient or unusable for students and schools with limited budgets.
Furthermore, by including mathematics in our daily language, we can reduce the misconceptions that mathematics is entirely, or even mostly, a computational activity. It is true that a large and important part about studying mathematics is learning how to solve practical problems with calculations, but middle and high school students are too often taught that mathematics is all about these calculations. If they were to sit-in on an upper-level college math course, however, it is likely that they would not see any numbers at all. Instead, they would see complete sentences and stylized letters. The field that we call Mathematics is as much about using logical arguments to prove that our computations actually work for what we use them for as it is about actually using them. Mathematics is a creative use of language and symbols to create representations of abstract concepts that we believe or take for granted to exist in the world. Most high schools and middle schools do not show students these aspects of mathematics. By making TeX more accessible to all, we can show our students that mathematics is a form of writing -- a form of expression -- that is more interesting and has more intellectual value than solving repetitive calculations. If students were to have mathematics incorporated into their online language and have a better understanding of what higher level mathematics is really about, then they will likely learn it more quickly, with less effort, and with more enthusiasm. These benefits will probably extend to other disciplines since mathematical literacy is an important foundation for doing serious work in almost any technical field. For example, students who want to engage with the recent progress in mathematical machine learning and artificial intelligence would benefit from knowing how to communicate math with the online community. This involves learning how to write math in complete sentences with TeX. How to use TeX The number of things that one can do with TeX is limitless and everyone's situation is different, so there is no single best way to use it. LaTeX is free to install and use on every major desktop operating system, but the installation process can be challenging for many users. On Windows, users need to decide on a free TeX distribution and free editor to work as the front end for the distribution. For security purposes, it is advised that users verify the hashes and signatures of the distributions and editors that they download to ensure their authenticity. If you want to avoid installing LaTeX on your desktop or do not have Administrator privileges to do so, there are online LaTeX editors that are free to use, but you may want to be wary of privacy concerns or need to use additional tools to save and sync your TeX between your devices. If you do not need the full document preparation system provided by LaTeX or prefer to get started with a simpler approach, we recommend using our Standard Notes app. It syncs your notes between all your devices, including mobile devices, with end-to-end encryption. The installation process is simple and straightforward. Our Markdown Math editor is fully equipped with KaTeX, and you can use it to import and export individual notes in your preferred extension for plaintext files (e.g., .txt, .md, and .tex) to share your math with others. You can also work with our Code Editor to type in LaTeX and use the Action Bar to export your .tex files to render with your dedicated TeX editor. 
The Standard Notes approach is great if you want to take notes or complete assignments in Markdown and KaTeX. With our full range of editors, themes, and backup options, you can also use the app for your other academic and personal notes. We offer students a 30% discount on the one-year and five-year plans of our Extended subscription, and we offer free refunds for up to 30 days if you change your mind and want to use a full LaTeX service. Click here to learn more. If you are only interested in learning TeX on desktop, you can use the free Visual Studio Code text editor. You can use VSCode to easily open and save .md files, including ones exported from Standard Notes, and view the TeX with extensions like Markdown All in One and Markdown+Math. These extensions also use KaTeX. If you only want to type TeX to communicate via email, you can use the TeX for Gmail Google Chrome extension. In any case, regardless of how you use TeX, it has the potential to improve your relationship with math and the way you produce documents. Beyond academics, you can use it to design professional resumes, publish technical specifications, and write papers to include in your personal or business portfolios. There are plenty of open source templates that you can use, or you can start from scratch to create a style that best suits your needs. A list of symbols supported by KaTeX Detexify - A free website where you can draw symbols and get their TeX syntax The LaTeX Project official website The Overleaf guide to learning how to use LaTeX
Encrypted, Ephemeral Customer Service April 1, 2020•959 words
The Silver Lining in Facebook's Privacy Nightmare Privacy advocates and journalists have known for years that the tech behemoth Facebook, Inc. threatens our privacy. The company owns three of the most popular social media platforms – Facebook.com, Instagram, and Whatsapp. Each of them is free to use, but Facebook, Inc. posted $55 billion in advertising revenue in 2018. Their advertising revenue was 98.5% of their total revenue for that year, and the percentage is expected to increase to 99% in 2020. In order to make so much money from selling advertisements, Facebook tracks its billions of users both on and off Facebook.com with artificial intelligence to determine "how they will behave, what they will buy, and what they will think." Then, Facebook uses these predictions to serve the users advertisements. Even though they claim that they do not sell your data, they used it to generate $22.1 billion of profit in 2018 alone. Despite all of these concerns, there is one feature that Facebook has offered since June 2015 to improve its users' privacy and security: support for Pretty Good Privacy encrypted email communications. This security feature is one that the vast majority of companies still lack, even those that make millions in profits.
What is Pretty Good Privacy Encrypted Email? Pretty Good Privacy, or PGP, is an end-to-end encryption system that ensures that emails can't be read or tampered with by third parties while they're in transit from the senders to the recipients. For Facebook, this means third parties won't be able to see your password reset information and any notifications for comments, posts, and chats that you're following. Facebook knows about almost everything you do on its platforms, but the encryption prevents anyone who intercepts your email from knowing as well.
Companies usually do not offer encrypted customer service and notifications because encryption is costly and difficult to implement, and many users don't know how to use it anyway, so it's simply not a priority.
Encrypted, Ephemeral Customer Service at Standard Notes All of the Standard Notes apps, servers, and extensions are free and open source because our mission is to protect your privacy and security. As part of that mission, we're proud to offer you end-to-end encrypted, ephemeral customer service at [email protected]. To use this service, you'll need to sign up for a free account at ProtonMail.com. You can also use the browser extension Mailvelope with your current email provider to send us end-to-end encrypted emails, but you'll need ProtonMail to make them ephemeral.
Private Customer Service Improves the User Experience We know that customer service is an essential part of the user experience and that privacy is an important component of customer service. We understand that, as users of products and services ourselves, we sometimes struggle to master all of the features they offer, no matter how simple the product is or how technically adept we are. Sometimes we want to be able to find the answers ourselves, so we write help files and documentation for you to read at your own pace. But we're also aware that, as users, we want it to be okay to admit when we're struggling, when we're frustrated, or when we've made a mistake. We make it more than okay by making it easy to ensure that third parties don't know about your requests for help and by making it easier to forget that they ever happened. With end-to-end encrypted, ephemeral emails, you can be honest with us without worrying about leaving a permanent mark on the internet.
The Right to be Forgotten It's obvious that companies should ensure that their users' passwords are stored properly and that their payment information isn't stolen. It's clear that people should be able to browse the internet without being tracked and profiled. It's also widely understood that if we choose to leave a service, we should be able to delete all of our account data. We have the right to control our data, and that should mean that we have the right to prevent third parties from reading our support inquiries and the right to delete our old emails with customer support. Using a product or service can require emotional energy, especially if we're using it to safeguard our most private, intimate thoughts. All of us feel anxious sometimes, especially when we think about the worst case scenarios. Even though asking for help doesn't need to be embarrassing or stressful, we know from personal experience that it sometimes can be. We hope that encrypted, ephemeral support can give users a greater sense of confidence by knowing that nobody else will see that they want or need help. We know that, as users, we want to be able to ask for help and then move on to enjoy what's next without having to worry or look back at all, for any reason.
Standard Notes is a Safe Place At the heart of our service is a desire to give you a place where you can be yourself and express your thoughts. We don't profit from selling your personal data because we don't have it and we don't want it. By design, all of your notes are end-to-end encrypted between your devices, so we can't read any of them. Knowledge of your personal habits, private thoughts, or intimate to-do's would be a liability to us and a direct contradiction to what we stand for.
We offer you encrypted, ephemeral customer service so that Standard Notes can be an even safer place for your notes, thoughts, and life's work. Which services we use for our daily operations? How to enable PGP support with Facebook How to send expiring/ephemeral emails with ProtonMail What is DNS-over-HTTPS? March 10, 2020•645 words In February 2020, the Mozilla Foundation announced that it would enable DNS-over-HTTPS by default for all Firefox users in the United States. In this post, we'll explain what that is and why it matters. Background: You and your computer need to take many steps in order to connect to a website. At some steps, there's a possibility for your privacy or security to be vulnerable. When you use a web browser such as Firefox to connect to a website, you are viewing files on a remote computer. These computers are usually set up to serve the website files and are also known as web servers. These servers are usually assigned a series of numbers and letters known as IP addresses. You can think of these IP addresses like phone numbers for computers. In order for Firefox to know which website to connect to, you usually need to tell it by clicking on a link or by typing the domain name of the website at the top of the browser. If the website is properly set up, then the domain will correspond to an IP address. When you connect to the domain in your browser, the domain automatically sends you to its corresponding IP address, which then sends you to its corresponding web server. Once you've connected to a web server with your browser, you can send and receive files to and from the web server. These files are collectively known as your traffic, or web traffic. For example, when you click on app.standardnotes.org or type it into your browser, you will automatically be sent to the IP address 34.228.118.242, where you can access the Standard Notes web app. If you connect to app.standardnotes.org over https, as in https://app.standardnotes.org, then your traffic to and from your web browser and the web server will be encrypted. Nobody will be able to read or tamper with your files while they're in transit. However, your connection to app.standardnotes.org and other websites will be known to your internet service providers and anyone else who is watching your network. They won't know what you're writing in your notes app, but they'll know that you're using it. DNS over HTTPS is the technology that encrypts the domain names and IP addresses that you're connecting to in a similar way that https encrypts your web traffic. Why it matters: With DNS over HTTPS, your internet service provider and anyone else listening to your internet connections won't be able to know where you're connecting to anymore. If you use DNS over HTTPS with the Standard Notes web app, then you can be private about being private. Standard Notes forces https on all its connections, but if you want to encrypt all your web traffic, you can use the browser extension HTTPS Everywhere by the Electronic Frontier Foundation. In Firefox, visit Options > General > Network Settings and click "Enable DNS over HTTPS". You can also search "DNS" in the "Find in Options" bar or visit the official tutorial by Mozilla. For other browsers, DNS over HTTPS can be enabled using the flags feature. First, update your browser to the latest version. If you use Microsoft Edge, you may need to install the new Chromium version. 
Then, depending on your browser, enter the following into the navigation bar and click enable: Google Chrome: chrome://flags/#dns-over-https Microsoft Edge: edge://flags/#dns-over-https Opera: opera://flags/opera-doh Vivaldi: vivaldi://flags/#dns-over-https Brave: brave://flags/#dns-over-https You can also enable DNS-over-HTTPS on your mobile phone by using Cloudflare's 1.1.1.1 app. "The Facts About Mozilla's DNS over HTTPS" by Mozilla "Introducing Warp: Fixing Mobile Internet Performance and Security" by Cloudflare Wikipedia entry on DNS over HTTPS What is Electron? January 27, 2020•557 words Electron is an open source software framework that software developers can use to create desktop apps that work across Windows, macOS, and Linux operating systems. Background: Each operating system can only run apps written in certain programming languages, called native languages. If a developer wants an app to work on the system's desktop, then they will need to write it in those languages. If an app is written in a system's native language, then it is called a native app. For example, native apps for iOS and macOS are written in a language called Swift. Developing a sophisticated app for one platform takes a tremendous amount of expertise, time, money, and effort. If a developer wants the app to work across multiple platforms, they will need to rewrite it in multiple languages. This requires them to either understand the intricacies of each operating system and their corresponding languages or to hire other developers who do. Both options are too expensive or difficult for most startups and individual developers. Additionally, writing an app in multiple languages results in multiple codebases, each of which requires resources to continue to maintain, debug, and improve. How it works: The three universal languages for web browsers are JavaScript, HTML, and CSS. Developers first write their app in these languages then use Electron to package it with technologies called Chromium and Node.js. Chromium is an engine that powers many web browsers including Opera, Google Chrome and Microsoft Edge. Node.js is a system that allows apps written in JavaScript to interact with the operating system. Both work across platforms. Apps built on Electron are in effect specially designed web browsers that work like native apps. Developers can start with building their app for just a single platform, like the web, then produce apps for all other platforms, like Windows and macOS, without expending additional resources on software development. Why it matters: Electron makes it easier to create cross platform apps. Developers can create cross platform apps without learning the intricacies of every operating system and their corresponding programming languages. Developers can use a single codebase for all three desktop apps, which makes it easier and quicker for them to catch and fix bugs. Users can experience lower prices for apps built on Electron because it reduces the costs for software engineers to develop them. A possible downside of apps built on Electron is that they may use more storage and memory (RAM) than if they were built natively. However, storage and memory are becoming cheaper for consumers every year, so even the cheapest new laptops can run apps built on Electron without users noticing the added system requirements. 
Examples of apps built on Electron: Communications apps including Discord, Riot.im, Rocket.Chat, Signal, Skype, Slack, and Whatsapp Productivity apps including Standard Notes, Ghost, and Wordpress.com Text editors including Atom and Visual Studio Code Password managers including Bitwarden and Keeper The bottom line: Many companies, both large and small, build apps on Electron because it reduces the costs to develop and maintain apps. Without it, many new apps wouldn't exist or work cross-platform. "In Defense of Electron" by Mo Bitar Apps Built on Electron Wikipedia article on Electron Electron Documentation
Being a quiet software company A user on our Slack, and some on reddit, have asked us why we've been sort of quiet on progress. Why no new blog posts? Why no new major releases? Why the seemingly dismissive attitude towards feature requests? Here was my response, and here's that new blog post you asked for :) I spent the last few years personally responding to every single user inquiry or request. I also handled every single feature, bug fix, release, blog post, etc. At some point recently, this all began to take a toll on me, and was not sustainable. So I set out to hire a team. Hiring a team has been what I have been working on full time for the last 3-4 months. It really is one of the hardest parts of building a company. Now that the team is close to fully built, we're sort of establishing a game plan. You don't just get straight to 100% throughput overnight. It takes organizing and orchestration. It takes planning, design, and intensive strategizing before a single line of code is written. If we're quiet, we're working. Software development is hard and arduous. And we still don't have a dedicated blogger or social media person. I run our Twitter, and I'm pretty bad at social media. But having a dedicated content person is just not our immediate priority. In any case, I agree that the only way to know we're actively here and actively working on improving SN is if you hang out in the Slack. Personally, I am a reserved character and have always worked in silence, and I don't put too much hype around upcoming features until they're absolutely done and ready for shipping. The software development process is fragile and intricate. We have a somewhat unique approach to feature requests, which at first glance may seem dismissive and anti-user: "We say no to feature requests." This was a necessity early on to manage the never-ending influx of feature requests and the finite resources available to develop and maintain them. We don't say no because it means we're never going to build the thing in question—we say no because we're not going to make promises we aren't sure we can deliver on yet. We will not build a feature if we're not absolutely certain it's something we can maintain for the next decade. This is why we don't have an official web clipper, for example. Can we spend a week or two building one? Sure. Can we provide immediate support and priority bug fixes for it when something goes wrong? Depends on our level of resources. But as of today, no. The good news is that this is a full time obsession for many of us. And we're not just sitting here, I promise you that. But we're also not announcing our every move. Perhaps we can move towards being a more extroverted company in the future. As a general note on how we build features, we won't add new features into minor releases.
If you ask for X feature and it sounds interesting to us, it's unlikely we'll just immediately bundle it into v3. Instead, how our process has typically worked is that every year or so, we release a new major version of our application. The features, design, and strategy for that release 100% centers around the kind of feedback and requests we've received over the last year. We have a really intimate temperature on where we feel the product needs improvement, and where we think the product excels. So while we can't act on your feedback immediately, it's definitely not forgotten. Most of my responses to feature requests typically take the form of "We'll keep that in mind!"—and that's no cop out. We're literally keeping that in mind. But that's about all we can do in the short term. I've said yes to feature requests prematurely before, and ended up not shipping them for whatever reason, and it backfires, real fast. So, here we are in 2020. We're working on v4. You're going to absolutely love it. But it's going to take a soul-crushing amount of work to complete. I don't dare make an estimate for how long it will take. But when it's done, it will be the best work we've ever put out. End-to-end encryption is a system of encryption that allows parties to communicate in a way that severely limits the potential for third-parties to eavesdrop on or tamper with the messages. Third-parties may include government agencies and companies that provide internet, telecommunications, and online services. End-to-end encryption helps people communicate securely by emails, voice calls, instant messages, and video chats. It also secures communication between devices for sharing and syncing files. End-to-end encryption is most commonly used for digital communications, but it can also be used on paper. The big picture: There are many systems of encryption. End-to-end encryption is considered an improvement upon another system called point-to-point encryption, which is a standard for transmitting credit card data. When parties communicate with each other, their data is usually transmitted through a third-party service provider, which acts as a messenger (e.g., Gmail). Point-to-point encryption encrypts data when it is in transit to and from the messenger, but the messenger can still read the message. End-to-end encryption encrypts the data both before it's given to the messenger, and also during transmission. Different mechanisms may be used to encrypt the data before transmission and during transmission. Transmission encryption is usually layered on top of the existing pre-transmission encryption. End-to-end encryption works by encrypting the data before the third-party receives it and by preventing the third-party from obtaining the decryption keys. The encryption is performed locally on the communicating parties' devices rather than on the third-party's web servers. Analogy: Using end-to-end encrypted communications is like sending a physical letter written in a language that nobody else can read or translate except the intended recipient. Postal service employees can read the to and from addresses and estimate when the letter was sent, but they aren't able to read the letter contents. Why it matters: End-to-end encryption helps ensure the confidentiality and authenticity of communications. It protects users' privacy and allows them to communicate with greater honesty and freedom. 
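As a rough illustration of where encryption and decryption take place in an end-to-end encrypted exchange, here is a minimal sketch in Python using the third-party cryptography package's Fernet recipe. It is not a description of any particular provider's implementation: Fernet is a simple shared-key scheme, whereas real end-to-end encrypted messengers use key-exchange protocols so that keys never pass through the service provider. The point of the sketch is only that the relay in the middle handles ciphertext and nothing else.

```python
from cryptography.fernet import Fernet

# Key generation and encryption happen locally, on the sender's device.
key = Fernet.generate_key()              # shared out-of-band with the recipient only
ciphertext = Fernet(key).encrypt(b"Meet at noon.")

# The service provider ("messenger") only ever stores and forwards ciphertext;
# without the key it cannot recover the message.
relayed = ciphertext                     # stand-in for transmission through a server

# Decryption happens locally, on the recipient's device.
plaintext = Fernet(key).decrypt(relayed)
print(plaintext.decode())                # -> Meet at noon.
```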
Pros and cons: End-to-end encryption protects user privacy by preventing unwarranted or unwanted surveillance by governments and service providers, but it also prevents law enforcement from obtaining communication records when they have justified warrants for doing so. Limitations: End-to-end encryption protects the content of communications, but it does not necessarily protect metadata about the communications, such as who contacted whom and at what time. End-to-end encryption protects data when the service provider has a data breach, but it does not always protect data when a user's device, account, or password is stolen, because these can be used to obtain the decryption keys. Service providers that claim to provide end-to-end encrypted services may nonetheless introduce secret methods of bypassing the encryption. These methods are known as backdoors and can be created intentionally or unintentionally. Thus, users are still required to place some trust in the service providers. The bottom line: End-to-end encryption is the new standard for service providers aiming to provide the highest levels of consumer data protection because even they are meant to be unable to decrypt their users' data, but it does not replace lower standards, such as point-to-point encryption, which are acceptable for other uses. Examples of applications with end-to-end encryption: Standard Notes for syncing notes ProtonMail for email Signal for instant messaging NextCloud for cloud storage Wikipedia entry on end-to-end encryption Wikipedia entry on point-to-point encryption
Encryption is the process of transforming readable text or data, called plaintext, into unreadable code called ciphertext. After the data is transformed, it is said to be encrypted. The reverse transformation process from ciphertext to plaintext is called decryption. Background: There are many methods of encryption. Each method aims to prevent decryption by anyone who doesn't have a specific secret key, such as a password, fingerprint, or physical device. The big picture: Different forms of encryption have been used for thousands of years to secure communications. Modern mathematics and technology allow for widespread use of encryption methods that make it computationally infeasible for third parties to decrypt the encrypted data without the secret key. Analogies: Modern encryption allows people to put their data into digital safes that have locks that are practically impossible to pick. Encrypting data is like translating it into a language that only the person with the secret key can understand. This prevents unauthorized people from reading your letter even if they take it out of the envelope. Why it matters: Encryption can be used to protect documents and information where physical security isn't enough or doesn't help. People can use encryption to prevent third parties from eavesdropping on or tampering with their communications. Businesses can use encryption to deliver digital goods to their customers and safeguard important information about their clients, employees, or practices. Governments can use encryption to protect secrets about their intelligence and military operations, issues concerning national security, and data about their citizens. Encryption is for everyone: Individuals use encryption for many of their daily activities. Smartphones, personal computers, and external hard drives are often encrypted by default or by user configuration. Encrypting devices helps prevent thieves from retrieving data from stolen devices.
Encryption helps protect debit and credit card information when they are used in-store and online. Devices that use Bluetooth, such as smart watches or garage door openers, use encryption to prevent unauthorized use. People can use encryption to verify the identities of the websites they browse, the software they download, and the documents they receive. Individuals can use encryption to write private notes and send private messages, emails, and calls to their friends and family. Knowledge Base: What is End-to-End Encryption? Wikipedia entry on Encryption
Dynamics of the discovery process of protein-protein interactions from low content studies Zichen Wang1,2,3, Neil R. Clark1,2,3 & Avi Ma'ayan1,2,3 BMC Systems Biology volume 9, Article number: 26 (2015) Cite this article
Thousands of biological and biomedical investigators study the functional roles of single genes and their protein products in normal physiology and in disease. The findings from these studies are reported in research articles that stimulate new research. It is now established that a complex regulatory network controls human cellular fate, and this community of researchers is continually unraveling this network's topology. Attempts to integrate results from such accumulated knowledge resulted in literature-based protein-protein interaction networks (PPINs) and pathway databases. These databases are widely used by the community to analyze new data collected from emerging genome-wide studies with the assumption that the data within these literature-based databases represents the ground truth and contains no biases. While suspicion of research focus biases is growing, concrete proof is still missing. Such bias is difficult to prove because the real PPINs are mostly unknown. Here we analyzed the longitudinal discovery process of literature-based mammalian and yeast PPINs and observed that these networks are discovered non-uniformly. The pattern of discovery is related to a theoretical concept proposed by Kauffman called "expanding the adjacent possible". We introduce a network discovery model which explicitly includes the space of possibilities in the form of a true underlying PPIN. Our model strongly suggests that research focus biases exist in the observed discovery dynamics of these networks. In summary, more care should be taken when using PPIN databases for analysis of newly acquired data, and when considering prior knowledge when designing new experiments.
Protein-protein interaction networks (PPINs) are an abstract representation of the body of knowledge about the known physical interactions between proteins within cells of an organism. In these networks, proteins are the nodes and their known physical interactions (PPIs) are the links. Literature-based PPINs and pathway databases are central in computational systems biology since they summarize accumulated knowledge and are reused for various types of analyses. For example, PPINs can be used to predict disease genes and identify disease related pathways or modules [1–5], applied to predict gene/protein function [6, 7], and used to predict undiscovered PPIs [8]. Commonly, lists of genes and proteins identified experimentally by high content profiling methods are analyzed for enrichment against literature curated PPINs and pathway databases [9], or such lists are seeded within PPINs to identify functional subnetworks, which helps to provide global biological context to the identified gene lists [10, 11]. Inclusion of PPINs was shown to improve the quality of inferred co-expression networks and the prioritization of genes that harbor mutations and copy number variations to better correlate these with disease [12–14]. There are several reasons to suspect that literature-based PPINs and pathway databases contain research focus biases. For instance, the uneven availability of tools such as mouse models or quality antibodies enables the study of some genes and proteins over others [15]. However, so far, concrete proof that such discovery bias really exists has not been reported.
It is difficult to prove that such bias exists because the real PPINs are mostly unknown. One null model for the discovery of any network is a uniformly even, uncorrelated exploration of all links and nodes without bias. An alternative model can simulate a network discovery process whereby the discovery in one region of the network predisposes the expansion of related discoveries. Such models can be compared to empirical observations. Tria et al. [16] empirically observed that with open data resources, such as online music catalogues and Wikipedia pages, one discovery spurs another. They then quantified their observation with the theoretical concept of "the adjacent possible" proposed by Kauffman [17]. This concept was first proposed in the context of biological evolution and technological evolution [18, 19]. Tria et al. were able to observe counterparts of Heap's law, whereby the number of discoveries made increases sub-linearly, and Zipf's law, whereby the rank distribution of the frequencies of the discovered elements follows a power-law [16]. These observations were illuminated with a model based on Polya's urn [20–22], which was able to unify Heap's and Zipf's laws and capture the correlations in the discoveries without explicit reference to the unknown space of possibilities to which the concept of "the adjacent possible" refers. Here we used the PubMed IDs associated with protein-protein interactions (PPIs) as time-stamps to temporally resolve the discovery dynamics of mammalian and yeast PPINs extracted manually from low-content published studies. We observe the counterparts of Heap's and Zipf's laws in the discovery of these mammalian and yeast PPINs. Furthermore, we identify individual proteins which exhibit accelerated or decelerated discovery rates. We then propose an original model which is related to Polya's urn. The model features "reinforcement", rich-get-richer type dynamics, together with "triggering", whereby novel discoveries trigger the possibility for a subset of new discoveries. Our model is the first network discovery model to explicitly incorporate a space of possibilities, which is the basis of Kauffman's "adjacent possible", in the form of an underlying network. Our model captures the observed dynamics of PPIN discovery, and provides strong suggestive evidence that research-focus biases exist within the patterned discovery of the yeast and mammalian PPINs.
Construction of the mammalian PPIN
18 different mammalian PPIN datasets and databases were combined (Table 1). To consolidate interactions, mouse identifiers were converted to their human orthologs using Homologene. Interactions without PMIDs and unary interactions were dropped. 134,590 PPIs from publications that reported more than 10 interactions were also excluded from most analyses. Collectively, the mammalian PPIN consists of 50,478 PPIs covering 9384 proteins, extracted from 34,853 publications with a range of discovery time spanning from April 1967 to October 2013. The yeast (Saccharomyces cerevisiae) PPIN was downloaded from iRefWeb 4.1 [23] by including only experimental physical interactions, filtering out unary interactions, and excluding from most analyses 82,391 PPIs from publications associated with more than 10 interactions. The yeast PPIN has 9678 PPIs between 3154 proteins, extracted from 6208 publications with a range of discovery time spanning from June 1946 to November 2011.
Table 1 Mammalian PPINs resources
Entropy calculation
We define the entropy of a sequence of discovery times for PPIs involving a given protein i with known degree \( \tilde{k}_i \) by:
$$ S(\tilde{k}_i) = -\sum_{j=1}^{\tilde{k}_i} \frac{f_j}{\tilde{k}_i} \log \frac{f_j}{\tilde{k}_i} $$
where \( f_j \) is the number of discovered PPIs involving protein i in the j-th interval of time, and the time intervals are defined by taking the time at which protein i was first observed until the final observation in the whole dataset and dividing this range into \( \tilde{k}_i \) equal-sized bins. This entropy measure was also normalized by dividing by the maximum possible entropy \( \log(\tilde{k}_i) \).
Random data permutations
In order to compare the entropy and interval distributions to a null distribution based on uniform randomization of the data, we destroyed the original data order while preserving the frequency distributions by employing random permutations. The first reshuffling method acts globally in time by randomly reassigning the time index to PPI discoveries. The second reshuffling method is local in that it only randomly reassigns time indices from the first appearance of the protein under consideration.
Generation of artificial networks for the network discovery model
Underlying networks for the PPI discovery model were generated by five different algorithms which resulted in networks with various global properties. In order to approximate the size of the true underlying mammalian PPIN, we constructed artificial networks with 25,000 nodes and tuned the parameters of the different network construction models to produce networks that have ~650,000 links. These numbers agree with a recent estimate of the size of the human PPIN [24]. For creating these background networks, 1) the Barabási-Albert (BA) scale-free network was created using the Barabási-Albert preferential attachment model [25]; 2) the BA cluster network was created using the Holme and Kim algorithm [26], which adds an extra step to the Barabási-Albert preferential attachment model; a probability of 0.995 was used to add a link to a node neighbor, so that the average clustering coefficient is close to that observed for the mammalian LC-PPIN; 3) the duplication-divergence (DD) network was generated using the algorithm by Ispolatov et al. [27] with a link retention probability of 0.6473; 4) the Erdős-Rényi random network was created using the algorithm by Batagelj and Brandes [28] with a probability of link creation of 0.00208. The global properties of the underlying networks are summarized in Table 2.
Table 2 Properties of the artificial network models
A model of protein-protein interaction network discovery
The true underlying PPIN is represented by the graph G(V, E), where the vertices V correspond to the set of all proteins and the edges E correspond to the set of all true PPIs. We examine five different network structures in order to study their effect on network discovery dynamics, as described above. For a given PPIN, edges are "discovered" by a random choice. At a given time step, the probability of discovering the true link between vertices i and j is given by \( \mu_{ij} \propto \mu(\tilde{k}_i, \tilde{k}_j) \), where \( \tilde{k}_x \) is the currently known degree of vertex x.
The form of the function μ determines the nature of the discovery process in this model. For example,
$$ \mu(\tilde{k}_i, \tilde{k}_j) \propto \text{constant} $$
corresponds to a uniform, unbiased discovery of the network in which all true edges are equally likely to be discovered. A biased PPIN discovery process can be modeled simply by:
$$ \mu(\tilde{k}_i, \tilde{k}_j) \propto 1 + \tilde{k}_i + \tilde{k}_j $$
In this case there is a process of reinforcement whereby proteins which have many discovered interactions are more likely to be examined for more interactions. Furthermore, we can enhance what is referred to in Tria et al. [16] as "triggering", whereby a new discovery triggers adjacent possibilities for subsequent discovery, simply by setting
$$ \mu(\tilde{k}_i, \tilde{k}_j) \propto \tilde{k}_i + \tilde{k}_j $$
In this case only links which are connected to at least one previously discovered protein can possibly become discovered. In the unbiased case, at times which are far from saturation, we expect that the known degree of each protein will increase linearly at a rate which is proportional to its true degree:
$$ \tilde{k}_i(t) = \frac{d_i}{2 \sum_i d_i}\, t $$
where \( d_i \) is the true degree of node i, and the factor of 2 arises because each link is shared by two nodes. In this case we do not expect any significant acceleration of growth for the nodes, i.e., we expect to discover interactions involving any given protein at a roughly constant rate.
Community structure analysis
The community structure detection algorithm used is based on modularity optimization [29]. The modularity of a partition into community structures measures the density of links inside the communities as compared to links between communities and is defined as [30]:
$$ Q = \frac{1}{2m} \sum_{i,j} \left[ a_{ij} - \frac{d_i d_j}{2m} \right] \delta(c_i, c_j) $$
where \( c_i \) is the community to which node i is assigned, \( m = \frac{1}{2} \sum_{ij} a_{ij} \), the δ-function \( \delta(u, v) \) is 1 if u = v and 0 otherwise, \( a_{ij} \) denotes an element of the symmetric adjacency matrix A of the graph G, and \( d_i \), \( d_j \) are the degrees of nodes i and j, respectively. This unsupervised algorithm involves modularity optimization by local changes to communities and aggregation of communities to build new communities. As a result, the algorithm generates a hierarchy of community structures. In practice, a Python implementation named "python-louvain" of this algorithm was applied.
The number of unique mammalian PPIs and proteins discovered each month, as well as the rate of discovery, has a few modes (Fig. 1a-d). In order to eliminate extrinsic factors, such as the changing pace of scientific discovery, while retaining the intrinsic properties of the PPIN discovery process, we converted the real-time discovery of each PPI to a time-ranked order. The discovery process of unique proteins appears to be sub-linear, which is analogous to Heap's law, which states that the number of unique words increases sub-linearly with the length of text (Fig. 1e-f).
Discovery of the mammalian and yeast LC-PPINs over time. Accumulation of discovered proteins (dotted line) and their interactions (solid line) and the discovery rate of interactions and proteins in the mammalian (a, b) and yeast (c, d) literature based PPINs.
The accumulation of discovered proteins (red dots) and their interactions (blue dots) are plotted with respect to the ranking index of time for mammalian (e) and yeast (f) PPINs In addition to the global properties of the discovered network, it is also important to examine local dynamical properties, such as the degree of individual proteins as a function of time. We observe that most proteins increase in degree linearly in both mammalian and yeast networks (Fig. 2a-b). Notably, many proteins are growing in their degree super-linearly. This super-linearity corresponds to acceleration in the rate at which publications are reporting interactions involving the protein. Examples of proteins with super- and sub- linear degree growth are shown in Additional file 1: Figure S1. The dynamics of individual proteins in the discovery of mammalian and yeast LC - and combined PPINs. a-b The distribution of growth exponents of the degrees of individual proteins; super-linear growth corresponds to an acceleration in the rate of discovery of PPIs involving the protein in question. c-d The normalized entropy plotted against the mean degree of the actual PPI discovery for the real network and also for reshuffled versions. e-h The distribution of time intervals between PPI discoveries involving each protein for the real PPI discovery process and also randomly reshuffled data in LC-PPINs (e-f) and combined PPINs made from both high-content and low-content studies (g-h) To examine these possibilities we compared the observed distribution of proteins with accelerated or decelerated rates to the distributions observed for random permutations of the same data (Fig. 2c-f). Similar null distributions were also examined by Tria et al. [16] in a completely different context. This analysis shows that there are significantly more proteins that are growing super-linearly than would be expected by random chance. This is indicative of correlations in the discovery process of PPIs – discoveries involving particular proteins tend to arrive in bursts with their corresponding short time intervals. To explore whether the correlated discovery of PPIs is a unique property of the low-content PPINs, we constructed mammalian and yeast PPINs by increasing the threshold for the maximum number of PPIs per publication from 10 to 50, to 100, to 1000 and with no threshold/filter at all. Observing the distribution of the discovery intervals for PPIs, we see that after including the high content studies, the distribution of intervals is similar to the distribution for randomly permuted data (Fig. 2g-h and Additional file 2: Figure S2). Interestingly, the entropy measure still shows difference between randomly shuffled discoveries and networks discovered by low- and high-content methods combined. We believe that this may be an artifact of the sparse data from high content PPIs, or a new type of bias within PPI data collected by high content methods. For example, PPIs from mass-spectrometry proteomics are known to be biased in detecting large, abundant or sticky proteins. In principle, all parts of a PPIN are discoverable and a uniform exploration is theoretically possible. However, in practice, the discovery process appears to be correlated. In order to illuminate the dynamics of PPINs discovery we introduce a simple model. With reference to Kaufman's "expanding the adjacent possible" [17] we explicitly incorporate the space of possibilities in the form of an underlying true network. 
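The following sketch is not the code used in this study; it is a minimal Python illustration (assuming the networkx package) of a single discovery step over such an underlying network, with the unbiased, reinforcement, and triggering choices of μ expressed as edge-weighting rules:

```python
import random
import networkx as nx

def discover(G, known_degree, mode="unbiased"):
    """Pick one undiscovered true edge of G according to the chosen discovery bias."""
    def weight(u, v):
        ku, kv = known_degree.get(u, 0), known_degree.get(v, 0)
        if mode == "unbiased":       # mu proportional to a constant
            return 1.0
        if mode == "reinforcement":  # mu proportional to 1 + k_u + k_v
            return 1.0 + ku + kv
        if mode == "triggering":     # mu proportional to k_u + k_v (adjacent possible only)
            return float(ku + kv)
        raise ValueError(mode)

    undiscovered = [e for e in G.edges() if not G.edges[e].get("discovered")]
    weights = [weight(u, v) for u, v in undiscovered]
    if sum(weights) == 0:            # e.g. triggering mode before any seed discovery
        weights = [1.0] * len(undiscovered)
    u, v = random.choices(undiscovered, weights=weights, k=1)[0]
    G.edges[u, v]["discovered"] = True
    known_degree[u] = known_degree.get(u, 0) + 1
    known_degree[v] = known_degree.get(v, 0) + 1
    return u, v

# Example: a scale-free underlying "true" network and a short discovery run.
G = nx.barabasi_albert_graph(n=1000, m=5, seed=0)
known = {}
for _ in range(200):
    discover(G, known, mode="reinforcement")
```

Repeated calls to this function trace out one realization of the discovery process; the known-degree growth of individual nodes can then be compared across the three modes.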
We begin with a random uniform exploration process, and then, by modulating the probability of discovering links based on the already discovered network, we study the effect research focus biases can have on the dynamics of the network discovery process. A schematic representation of this model is shown in Additional file 3: Figure S3. Although the true PPIN is unknown, we can examine the effect of global network properties within this model. When we examine the distribution of the growth exponent of the degrees of each node in the model, we see that highly accelerating nodes only occur in the biased models, and including triggering enhances this effect (Fig. 3). These results are for the scale-free (BA) clustered artificial network as the underlying network; for the other artificial network models these results vary (Additional file 4: Figure S4, Additional file 5: Figure S5, Additional file 6: Figure S6, Additional file 7: Figure S7).
Three model realizations with the scale-free (BA) clustered underlying artificial PPIN. (a) Distribution of degree growth exponents; (b) distribution of time intervals between PPI discoveries involving each protein; (c) the normalized entropy of PPI discoveries for each protein averaged over each degree
Furthermore, we notice that accelerating nodes only occur in the models where the underlying networks have a power-law degree distribution (Additional file 8: Figure S8). This illustrates the relevance of the underlying network structure. It seems that the topology of the space of possibilities has an impact on the discovery process. We note that the difference between the biased and unbiased models is not as marked as in the real PPI discovery process (Additional file 8: Figure S8). However, it is clear that the discovery of the real networks must contain biases. Our ability to mark individual proteins as either accelerating or decelerating in their discovery rates can be used to identify hot and cold discovery regions within the mammalian PPIN. To identify such regions, we applied a network clustering algorithm to decompose the networks into clusters, and then computed the average discovery rate within each cluster (Fig. 4). As expected, out of the 102 clusters identified, several clusters are enriched for rapidly accelerating or decelerating proteins. Each cluster with significant enrichment for accelerating or decelerating rates is labeled by its most significantly enriched gene ontology term (Fig. 4d). The network contains two notable clusters with decelerating discovery rates: TGF beta signaling (Fig. 4e) and aminoacyl tRNA biosynthesis (Fig. 4f).
Relationship between community structure and PPI discovery rates in PPINs. (a) Connected components; (b) communities; (c) modularity, which is a quantity that measures the strength of a community partition compared to random [30]. (d) Clusters with significant over-representation of proteins with accelerating or decelerating PPI discovery rates. (e, f) Subnetworks connecting proteins from two representative cold clusters, where proteins are connected through their known interactions with other members of the cluster
By time-resolving the mammalian and yeast literature-based PPINs we identified a clear pattern in the PPI discovery process.
This pattern is consistent with a biased discovery process which exhibits properties of reinforcement, whereby commonly studied proteins are more likely to be further studied in the near future, and of triggering, whereby discoveries spur related discoveries in the PPI network neighborhood. We introduced a model of PPI network discovery which supports the idea that research focus bias is relevant in the discovery process of mammalian and yeast PPIs. The model demonstrates that biased network discovery can explain the existence of many more proteins whose degree is accelerating compared with the number of such proteins expected in more random discovery processes. Such trends should be considered when reusing PPI data for the interpretation of new results, for drawing conclusions about the underlying biology, and for making decisions about the next set of experiments. A recent publication by Schnoes et al. [31] suggested that there exist significant biases in the discovery of gene functional annotations, and that this has a significant effect on their interpretation and application to biological investigations; here we extended this observation to the discovery of PPIs. Our model of PPI network discovery also revealed that an underlying network with the scale-free property is necessary for the appearance of proteins with super-linear degree growth, which supports the hypothesis that the topology of the real PPINs is scale free [25, 32, 33]. Interestingly, the local clustering of the underlying network does not seem to play a role in the emergence of biases during the discovery process. Notably, the observed bias is stronger in the mammalian than in the yeast PPIN in terms of the ratio of proteins with super-linear degree growth. One explanation for this is that the discovered mammalian PPIN is further from saturation compared to yeast, which is supported by the estimated sizes of the human and yeast PPINs [24]. To explore whether the effects of research focus bias introduced in low-content studies can be reduced, we included PPIs from high-throughput studies. We observed that the overall reinforcement and triggering effects on the discovery process are mitigated. However, those effects can still be detected in the discovery of PPIs for many individual proteins (Additional file 2: Figure S2), suggesting that the inclusion of high-content studies helps to some extent to reduce the research focus bias in LC-PPINs. Recent studies demonstrate that experimental methods that identify many reliable PPIs in a single study show a more uniform distribution of PPIs [3, 34]. However, high costs, the requirement for specific skills, and the years of concentrated effort involved are still great obstacles to making such profiling experiments more widely applied and accepted. In principle, the shift toward genome-wide, system-level biology is expected to correct and better inform our current understanding of the real PPINs. In addition, the binary view of PPIs is limited. It is now well established that most proteins within cells work as part of macro-molecular complexes, and thus we expect that the in-silico reconstruction of such complexes will become more central, while less emphasis will be placed on the identification and reuse of binary PPIs. Nevertheless, methods that correct for research focus biases can potentially improve the use of such PPIN and pathway databases for their various computational applications. Cordeddu V, Di Schiavi E, Pennacchio LA, Ma'ayan A, Sarkozy A, Fodale V, et al.
Mutation of SHOC2 promotes aberrant protein N-myristoylation and causes Noonan-like syndrome with loose anagen hair. Nat Genet. 2009;41(9):1022–6. Article CAS PubMed Central PubMed Google Scholar Lim J, Hao T, Shaw C, Patel AJ, Szabó G, Rual J-F, et al. A protein–protein interaction network for human inherited ataxias and disorders of Purkinje cell degeneration. Cell. 2006;125(4):801–14. Vidal M, Cusick Michael E, Barabási A-L. Interactome networks and human disease. Cell. 2011;144(6):986–98. Barabasi A-L, Gulbahce N, Loscalzo J. Network medicine: a network-based approach to human disease. Nat Rev Genet. 2011;12(1):56–68. Oti M, Snel B, Huynen MA, Brunner HG. Predicting disease genes using protein–protein interactions. J Med Genet. 2006;43(8):691–8. Vazquez A, Flammini A, Maritan A, Vespignani A. Global protein function prediction from protein-protein interaction networks. Nat Biotechnol. 2003;21(6):697–700. Sharan R, Ulitsky I, Shamir R. Network‐based prediction of protein function. Mol Syst Biol. 2007;3(1):88. Yu H, Paccanaro A, Trifonov V, Gerstein M. Predicting interactions in protein networks by completing defective cliques. Bioinformatics. 2006;22(7):823–9. Huang DW, Sherman BT, Lempicki RA. Bioinformatics enrichment tools: paths toward the comprehensive functional analysis of large gene lists. Nucleic Acids Res. 2009;37(1):1–13. Berger SI, Posner JM, Ma'ayan A. Genes2Networks: connecting lists of gene symbols using mammalian protein interactions databases. BMC Bioinformatics. 2007;8(1):372. Antonov AV, Dietmann S, Rodchenkov I, Mewes HW. PPI spider: a tool for the interpretation of proteomics data in the context of protein–protein interaction networks. Proteomics. 2009;9(10):2740–9. Neale BM, Kou Y, Liu L, Ma'Ayan A, Samocha KE, Sabo A, et al. Patterns and rates of exonic de novo mutations in autism spectrum disorders. Nature. 2012;485(7397):242–5. Jia P, Zheng S, Long J, Zheng W, Zhao Z. dmGWAS: dense module searching for genome-wide association studies in protein–protein interaction networks. Bioinformatics. 2011;27(1):95–102. Califano A, Butte AJ, Friend S, Ideker T, Schadt E. Leveraging models of cell regulation and GWAS data in integrative network-based association studies. Nat Genet. 2012;44(8):841–7. Edwards AM, Isserlin R, Bader GD, Frye SV, Willson TM, Yu FH. Too many roads not taken. Nature. 2011;470(7333):163–5. Tria F, Loreto V, Servedio VDP, Strogatz SH. The dynamics of correlated novelties. arXiv preprint arXiv:13101953. 2013. Kauffman SA. Investigations: the nature of autonomous agents and the worlds they mutually create. In: Santa Fe Institute. 1996. Johnson S. Where good ideas come from: the natural history of innovation. UK: Penguin; 2010. Wagner A, Rosen W. Spaces of the possible: universal Darwinism and the wall between technological and biological innovation. J R Soc Interface. 2014;11(97):20131190. Johnson NL, Kotz S. Urn models and their application: an approach to modern discrete probability theory. New York: Wiley; 1977. Mahmoud H. Pólya urn models: CRC press. 2008. Pólya G. Sur quelques points de la théorie des probabilités. In: Annales de l'institut Henri Poincaré: 1930. Presses universitaires de France: 117–161. Turner B, Razick S, Turinsky AL, Vlasblom J, Crowdy EK, Cho E, et al. iRefWeb: interactive analysis of consolidated protein interaction data and their supporting evidence. Database. 2010;2010:baq023. Stumpf MP, Thorne T, de Silva E, Stewart R, An HJ, Lappe M, et al. Estimating the size of the human interactome. Proc Natl Acad Sci U S A. 
2008;105(19):6959–64. Barabási A-L, Albert R. Emergence of scaling in random networks. Science. 1999;286(5439):509–12. Holme P, Kim BJ. Growing scale-free networks with tunable clustering. Phys Rev E. 2002;65(2):026107. Ispolatov I, Krapivsky PL, Yuryev A. Duplication-divergence model of protein interaction network. Phys Rev E. 2005;71(6):061911. Batagelj V, Brandes U. Efficient generation of large random networks. Phys Rev E. 2005;71(3):036113. Blondel VD, Guillaume J-L, Lambiotte R, Lefebvre E. Fast unfolding of communities in large networks. J Stat Mech: Theory Exp. 2008;2008(10):P10008. Newman MEJ. Analysis of weighted networks. Phys Rev E. 2004;70(5):056131. Schnoes AM, Ream DC, Thorman AW, Babbitt PC, Friedberg I. Biases in the experimental annotations of protein function and their effect on our understanding of protein function space. PLoS Comput Biol. 2013;9(5):e1003063. Barabasi A-L, Oltvai ZN. Network biology: understanding the cell's functional organization. Nat Rev Genet. 2004;5(2):101–13. Han J-DJ, Dupuy D, Bertin N, Cusick ME, Vidal M. Effect of sampling on topology predictions of protein-protein interaction networks. Nat Biotech. 2005;23(7):839–44. Yu H, Tardivo L, Tam S, Weiner E, Gebreab F, Fan C, et al. Next-generation sequencing to generate interactome datasets. Nat Meth. 2011;8(6):478–80. This research was supported by NIH grants: R01GM098316, U54CA189201 and U54HL127624 to AM. Department of Pharmacology and Systems Therapeutics, Icahn School of Medicine at Mount Sinai, One Gustave L. Levy Place Box 1215, New York, NY, 10029, USA Zichen Wang, Neil R. Clark & Avi Ma'ayan BD2K-LINCS Data Coordination and Integration Center, New York, USA Knowledge Management Center for the Illuminating the Druggable Genome project, New York, USA Correspondence to Avi Ma'ayan. AM, NRC and ZW designed the study and wrote the manuscript. NRC developed the network discovery model and wrote the equations. ZW performed the computational analyses and generated the figures. All authors read and approved the final manuscript. Zichen Wang and Neil R. Clark contributed equally to this work. The time-dependence of protein degrees for (A, D) hub proteins; (B, E) accelerating proteins; and (C, F) decelerating proteins of mammalian (A–C) and yeast (D–F) LC-PPINs. The degree growth exponents (slopes) are indicated in the legends. The dynamics of network discovery of mammalian and yeast PPINs constructed with different PPI-per-publication cutoffs. Normalized entropy of PPI discoveries for each protein averaged over each degree, as well as the distribution of the time intervals between PPI discoveries involving each protein, are plotted for each network. The PPI-per-publication cutoffs used for construction of each network are indicated at the top of each column. Schematic of three realizations of the network discovery model. The same graph serves as the underlying, true PPI network in each case. Nodes in the graph correspond to proteins and edges correspond to PPIs. Edges are "discovered" randomly and the discovery is indicated in red. In the unbiased model each edge is equally likely to be discovered. In the model realization with reinforcement, the probability of discovering an edge is proportional to the sum of the degrees of the proteins it connects, such that edges connecting higher-degree proteins are more likely to be discovered, as indicated by the weight of the edge line.
In the last example, the triggering process is involved, whereby new discoveries open up the possibility of further discoveries; in this model only edges that are connected to an already-discovered protein are discoverable, while the reinforcement property is also maintained. Three model realizations with a BA graph as the underlying PPIN. (A) Distribution of degree growth exponents; (B) distribution of the time intervals between PPI discoveries involving each protein; (C) normalized entropy of PPI discoveries for each protein averaged over each degree. Three model realizations with a duplication-divergence graph as the underlying PPIN. (A) Distribution of degree growth exponents; (B) distribution of the time intervals between PPI discoveries involving each protein; (C) normalized entropy of PPI discoveries for each protein averaged over each degree. Three model realizations with an Erdős–Rényi random graph as the underlying PPIN. (A) Distribution of degree growth exponents; (B) distribution of the time intervals between PPI discoveries involving each protein; (C) normalized entropy of PPI discoveries for each protein averaged over each degree. Three model realizations with a complete graph as the underlying PPIN. (A) Distribution of degree growth exponents; (B) distribution of the time intervals between PPI discoveries involving each protein; (C) normalized entropy of PPI discoveries for each protein averaged over each degree. Ratios of proteins in actual and model realizations of PPINs with super-linear and sub-linear growth of PPIs. Each model realization was performed three times and standard deviations of the ratios are indicated by the error bars. Wang, Z., Clark, N.R. & Ma'ayan, A. Dynamics of the discovery process of protein-protein interactions from low content studies. BMC Syst Biol 9, 26 (2015). https://doi.org/10.1186/s12918-015-0173-z Keywords: Pathway database; Artificial network; Underlying network; Network discovery
Timothy V. Pyrkov (ORCID: 0000-0002-1503-0398)1, Konstantin Avchaciov1, Andrei E. Tarkhov1,2,3, Leonid I. Menshikov1,4, Andrei V. Gudkov (ORCID: 0000-0003-2548-0154)5,6 & Peter O. Fedichev1,3 Nature Communications volume 12, Article number: 2765 (2021) Cite this article We investigated the dynamic properties of the organism state fluctuations along individual aging trajectories in a large longitudinal database of CBC measurements from a consumer diagnostics laboratory. To simplify the analysis, we used a log-linear mortality estimate from the CBC variables as a single quantitative measure of the aging process, henceforth referred to as dynamic organism state indicator (DOSI). We observed that the age-dependent broadening of the population DOSI distribution could be explained by a progressive loss of physiological resilience measured by the DOSI auto-correlation time. Extrapolation of this trend suggested that DOSI recovery time and variance would simultaneously diverge at a critical point of 120–150 years of age corresponding to a complete loss of resilience. The observation was immediately confirmed by the independent analysis of correlation properties of fluctuations in intraday physical activity levels collected by wearable devices. We conclude that the criticality resulting in the end of life is an intrinsic biological property of an organism that is independent of stress factors and signifies a fundamental or absolute limit of human lifespan. Aging is manifested as a progressive functional decline leading to exponentially increasing prevalence1,2 and incidence of chronic age-related diseases (e.g., cancers, diabetes, cardiovascular diseases, etc.3,4,5) and disease-specific mortality6. Much of our current understanding of the relationship between aging and changes in physiological variables over an organism's lifespan originates from large cross-sectional studies7,8,9 and has led to the development of increasingly reliable "biological clocks" or "biological age" estimations reflecting age-related variations in blood markers10, DNA methylation (DNAm) states11,12 or patterns of locomotor activity13,14,15 (see16 for a review of biological age predictors). All-cause mortality in humans17,18 and the incidence of chronic age-related diseases increase exponentially and double every 8 years3. Typically, however, the physiological indices and the derived quantities such as biological age predictions change from the levels observed in the young organism at a much lower pace than would be expected from the Gompertzian mortality acceleration. The most important factors that are strongly associated with age are also known as the hallmarks of aging19 and may, at least in principle, be modified pharmacologically. In addition, by analogy to resilience in ecological systems, dynamic properties such as physiological resilience, measured as the recovery rate from organism state perturbations20,21, have also been associated with mortality22 and thus may serve as an early warning sign of impending health outcomes23,24. Hence, a better quantitative understanding of the intricate relationship between the slow physiological state dynamics, resilience, and the exponential morbidity and mortality acceleration is required to allow the rational design, development, and clinical validation of effective antiaging interventions.
We addressed these theoretical and practical issues by a systematic investigation of aging, organism state fluctuations, and gradual loss of resilience in a dataset involving multiple Complete Blood Counts (CBC) measured over short periods of time (a few months) from the same person along the individual aging trajectory. The Neutrophil to Lymphocyte Ratio (NLR) and red cell distribution width have already been suggested and characterized as biomarkers of aging25,26,27,28. Instead of focusing on individual factors, to simplify matters, we followed29,30 and described the organism state by means of a single variable, henceforth referred to as the dynamic organism state indicator (DOSI), in the form of the log-transformed predictor of a proportional all-cause mortality model. First, we observed that early in life the DOSI dynamics quantitatively follows the universal ontogenetic growth trajectory from31. Once the growth phase is completed, the indicator demonstrated all the expected biological-age properties, such as association with age, multiple morbidity, unhealthy lifestyles, mortality and future incidence of chronic diseases. Late in life, the dynamics of the organism state captured by DOSI along the individual aging trajectories is consistent with that of a stochastic process (random walk) on top of the slow aging drift. The increase in the DOSI variability is approximately linear with age and can be explained by the rise of the organism state recovery time. The latter is thus an independent biomarker of aging and a characteristic of resilience. Our analysis shows that the auto-correlation time of DOSI fluctuations grows (and hence the recovery rate decreases) with age from about 2 weeks to over 8 weeks for cohorts aging from 40 to 90 years. The divergence of the recovery time at advanced ages appeared to be an organism-level phenomenon. This was independently confirmed by the investigation of the variance and the autocorrelation properties of physical activity levels from another longitudinal dataset of intraday step-counts measured by wearable devices. We put forward arguments suggesting that such behavior is typical for complex systems near a bifurcation (disintegration) point and thus the progressive loss of resilience with age may be a dynamic origin of the Gompertz law. Finally, we noted, by extrapolation, that the recovery time would diverge and hence the resilience would be ultimately lost at the critical point at an age in the range of 120–150 years, thus indicating the absolute limit of human lifespan. Quantification of aging and development Complete blood count (CBC) measurements are most frequently included in standard blood tests and thus comprise a large common subset of physiological indices reported across the UKB (471,473 subjects, age range 39–73 y.o.) and NHANES datasets (72,925 subjects, age range 1–85 y.o., see Supplementary Table 1 for the description of the data fields). To understand the character of age-related evolution of the organism state we employed a convenient dimensionality-reduction technique, Principal Component Analysis (PCA). The coordinates of each point in Fig. 1A are obtained by averaging the first three principal component scores of the PCA-transformed CBC variables in age-matched cohorts of the NHANES dataset. The average points follow a well-defined trajectory or a flow in the multivariate configuration space spanned by the physiological variables and clearly correspond to various stages of the organism development and aging.
Fig. 1: Quantification of aging and development. A The graphical representation of the PCA for 5–85-year-old NHANES participants follows an age-cohort-averaged aging trajectory. The centers of each sequential age cohort are plotted in the first three PCs. Three approximately linear segments are clearly seen in the aging trajectory, corresponding to (I) age < 35; (II) age 35–65; (III) age > 65. B Dynamic organism state indicator (DOSI) mean values (solid line) and variance (shaded area) are plotted relative to age for all participants of the NHANES study. The average line demonstrates nearly linear growth after the age of 40. At younger ages the dependence on age is different and is consistent with the universal curve suggested by the general model for ontogenetic growth31. To illustrate the general character of this early-life dependence we superimposed it with the curve of mean weight in age cohorts of the same population (dotted line). All values are plotted in normalized form as in31. The average DOSI of the "most frail" ("compound morbidity index", CMI > 0.6) individuals is shown with the dashed line. C Distributions of sex- and age-adjusted DOSI in cohorts of NHANES participants in different morbidity categories relative to the DOSI mean in cohorts of "non-frail" (1 or no diagnoses, CMI < 0.1) individuals. Note that the distribution function in the "most frail" group (more than six diagnoses, CMI > 0.6) exhibited the largest shift and a profound deviation from the symmetric form. Qualitatively, we differentiated three distinctive segments of the aging trajectory, corresponding to (I) early adulthood (16–35 y.o.); (II) middle ages (35–65 y.o.); and (III) older ages (older than 65 y.o.). The middle segment in trajectories for women has an additional change of direction, presumably associated with menopause, but we leave its investigation for future work. Inside each of the segments, the trajectory was approximately linear. This suggests that over long periods of time (age), CBC variations other than noise could be described by the dynamics of a single dynamic variable (degree of freedom) tracking the distance travelled along the aging trajectory and henceforth referred to as the DOSI. Morbidity and mortality rates increase exponentially with age and a log-linear risk predictor model is a good starting point for characterization of the functional state of an organism and quantification of the aging process15,29. Accordingly, we employed the Cox proportional hazards model32 and trained it on the death register of the NHANES study, using log-transformed CBC measurements and the sex variable (but not age) as covariates. Altogether, the training subset comprised participants aged 40 y.o. and older. The mortality risk model yielded a single log-hazards value for every subject, which increased across the full age range of NHANES participants (Fig. 1B). As we will see below, it was a useful and dynamic measure of the organism state, henceforth identified with the DOSI. Early in life the dynamics of the organism state has, of course, nothing to do with the late-life increase of mortality rate (i.e., aging), but is rather associated with ontogenetic growth.
Accordingly, we checked that the organism state measured by DOSI follows closely the theoretical trajectory of the body mass adopted from31: $$x(t)=X\left(1-\left[1-\left(\frac{x_0}{X}\right)^{1/4}\right]e^{-t/t_0}\right)^{4}.$$ Here x is the body mass, or in the linear regime any quantity such as DOSI depending on the body mass, t is the age, t0 is the characteristic time scale associated with the development, and x0 and X are the asymptotic levels of the same property at birth and in the fully grown state, respectively. The dots and the dashed lines in Fig. 1B represent the age-cohort-averaged body mass trajectory and the best fit of the age-cohort-averaged DOSI levels by Eq. (1) for the same NHANES participants. The approximation works remarkably well up until the age of about 40. The characteristic time scale from the fit, t0 = 6.8 years, is close to the best-fit value of 6.3 years obtained from the fit of the body mass trajectory. As the body size increases, the metabolic output per unit mass slows down and the organism reaches a steady state corresponding to the fully grown organism. The inspection of Fig. 1B shows, however, that the equilibrium solution of the organism growth problem appears to be unstable in the long run and the organism state dynamics measured by DOSI exhibits deviations from the stationary solution beyond the age of ~40 years. To separate the effects of chronic diseases from disease-free aging, we followed33 and characterized the health status of each study participant based on the number of health conditions diagnosed for an individual, normalized to the total number of conditions included in the analysis, to yield the "compound morbidity index" (CMI) with values ranging from 0 to 1. The list of health conditions common to the NHANES and UKB studies that were used for CMI determination is given in Supplementary Table 2. The CMI can be viewed as a convenient proxy for the Frailty Index introduced in34, which is a composite marker depending on the prevalence of 46 health deficits. Unlike the Frailty Index, the CMI requires only the variables that are available simultaneously in NHANES and UKB. In NHANES, among individuals aged 40 and older, the correlation between the Frailty Index and CMI was high (Pearson r = 0.64). Therefore we accepted a semi-quantitative correspondence between the CMI and the Frailty Index and categorized UKB and NHANES participants into cohorts of increasing frailty according to CMI. Multiple morbidity manifests itself as elevated DOSI levels. This can be readily seen from the difference between the solid and dashed lines in Fig. 1B, which represent the DOSI means in the cohorts of healthy ("non-frail", CMI < 0.1) and "most frail" (CMI > 0.6) NHANES participants, respectively. In groups stratified by increasing number of health condition diagnoses, the normalized distribution of DOSI values (after adjustment by the respective mean levels in age- and sex-matched cohorts of healthy subjects) exhibited a progressive shift and increased variability (see Fig. 1C and Supplementary Fig. 1b for NHANES and UKB, respectively). For both NHANES and UKB, the largest shift was observed in the "most frail" (CMI > 0.6) population. The increasingly heavy tail at the high end of the DOSI distribution in this group is characteristic of an admixture of a distinct group of individuals occupying the adjacent region in the configuration space corresponding to the largest possible DOSI levels.
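The fit of the age-cohort-averaged DOSI to Eq. (1) can be reproduced schematically with scipy.optimize.curve_fit. A minimal sketch follows; the age and DOSI arrays are synthetic placeholders standing in for the NHANES cohort averages, and only the functional form is taken from the text above.

```python
import numpy as np
from scipy.optimize import curve_fit

def ontogenetic_growth(t, X, x0, t0):
    """Universal growth curve: x(t) = X * (1 - [1 - (x0/X)**(1/4)] * exp(-t/t0))**4."""
    return X * (1.0 - (1.0 - (x0 / X) ** 0.25) * np.exp(-t / t0)) ** 4

# Placeholder cohort data: ages 2-40 y.o. with noise around a curve with t0 = 6.8 years
age_cohort = np.arange(2, 41, 2, dtype=float)
dosi_mean = ontogenetic_growth(age_cohort, X=1.0, x0=0.05, t0=6.8) \
            + np.random.default_rng(0).normal(0.0, 0.01, age_cohort.size)

popt, _ = curve_fit(ontogenetic_growth, age_cohort, dosi_mean,
                    p0=(1.0, 0.05, 5.0),
                    bounds=([0.1, 1e-3, 0.1], [10.0, 1.0, 50.0]))
X_fit, x0_fit, t0_fit = popt
print(f"characteristic growth time scale t0 = {t0_fit:.1f} years")
```

Fitting the cohort-averaged body mass with the same function would give the second time scale quoted in the text, allowing a direct comparison of the two estimates.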
Therefore, DOSI displacement from zero-mean (after proper adjustments for age and sex) was expected to reflect the fraction of "most frail" individuals in a cohort of any given age. This was confirmed to be true using the NHANES dataset (Fig. 2A; r = 0.83). Fig. 2: The relation between the dynamic organism state indicator (DOSI) and lifestyles, frailty, and health risks. A The fraction of frail persons is strongly correlated with the excess DOSI level, that is, the difference between the DOSI of an individual and its average in the sex- and age-matched cohort of the "non-frail" population in NHANES. B An exponential fit showed that until the age of 70 y.o. the fraction of the "most frail" individuals in the population grows approximately exponentially with age with doubling rate constants of 0.08 and 0.10 per year in the UKB and the NHANES cohorts, respectively. C Distribution of log-hazards ratio in age- and sex-matched cohorts of NHANES participants who never smoked, smoked previously but quit prior to the time of study participation, or were current smokers at the time of the study. The DOSI level is elevated for current smokers, while it is almost indistinguishable between never-smokers and those who quit smoking (two-sided Mann–Whitney test p > 0.05). Each boxplot shows the center (median) of the distribution, boxplot bounds show the 25th and 75th percentiles, and boxplot whiskers show the 5th and 95th percentiles. The fraction of "most frail" subjects still alive increased exponentially at every given age until the age corresponding to the end of healthspan was reached. The characteristic doubling rate constants for the "most frail" population fractions were 0.087 and 0.094 per year in the NHANES and the UKB cohorts, respectively, in good agreement with the accepted Gompertz mortality doubling rate of 0.085 per year35, see Fig. 2B. We note that the prevalence of diseases in the NHANES cohort is consistently higher than that in the UKB population, although the average lifespan is comparable in the two countries. This may be a consequence of the enrollment bias in the UKB: a life-table analysis in36 suggests that UKB subjects appear to outlive typical UK residents.
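The doubling-rate estimate quoted above amounts to a log-linear fit of the cohort frail fractions against age. A minimal sketch, with placeholder numbers in place of the actual NHANES/UKB cohort counts:

```python
import numpy as np

# frail_frac[i]: fraction of "most frail" (CMI > 0.6) subjects in the age cohort ages[i]
# (placeholder values that roughly follow an exponential trend, not the study data)
ages = np.arange(40, 71, 5, dtype=float)
frail_frac = 0.005 * np.exp(0.09 * (ages - 40))

# Linear fit of log(fraction) vs. age gives the exponential growth rate constant
slope, intercept = np.polyfit(ages, np.log(frail_frac), 1)
print(f"growth rate constant: {slope:.3f} per year; "
      f"doubling time: {np.log(2) / slope:.1f} years")
```

With the study's rate constants of roughly 0.09 per year, the implied doubling time is about 8 years, matching the Gompertz mortality doubling time cited in the text.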
Therefore, we conclude that the DOSI is a characteristic of overall health status that is universally associated with the risks of developing the most prevalent diseases and, therefore, with the end of healthspan as indicated by the onset of the first morbidity (HR ≈ 1.05 for the "First morbidity" entry in Supplementary Table 2). In the most healthy "non-frail" individuals with life-shortening lifestyles/behaviors, such as smoking, the DOSI was also elevated, indicating a higher level of risk of future diseases and death (Fig. 2C). Notably and in agreement with the dynamic nature of DOSI, the effect of smoking appeared to be reversible: while the age- and sex-adjusted DOSI means were higher in current smokers compared to non-smokers, they were indistinguishable between groups of individuals who never smoked and who quit smoking (cf.15,39). Physiological state fluctuations and loss of resilience To understand the dynamic properties of the organism state fluctuations in relation to aging and diseases, we used two large longitudinal datasets, jointly referred to and available as GEROLONG, including anonymized information on: (a) CBC measurements from InVitro, the major Russian clinical diagnostics laboratory, and (b) physical activity records measured by step counts collected by means of a freely available iPhone application. The CBC slice of the combined dataset included blood test results from 388 male and 694 female subjects aged 30–90 with complete CBC analyses that were sampled 10–20 times within a period of more than 3 years (up to 42 months). There was no medical condition information available for the GEROLONG subjects. Hence, for the CBC measurements we used the mean DOSI level corresponding to the "most frail" NHANES and UKB participants as the cutoff value to select "non-frail" GEROLONG individuals (141 male and 266 female subjects aged 40–90) for subsequent analysis. The difference between the mean DOSI levels in groups of the middle-aged and the eldest available individuals was of the same order as the variation of DOSI across the population at any given age (see Fig. 1B). Accordingly, serial CBC measurements along the individual aging trajectories revealed large stochastic fluctuations of the physiological variables around their mean values, which were considerably different among individual study participants. Naturally, physiological variables at any given moment of time reflect a large number of stochastic factors, such as the manifestation of the organism's responses to endogenous and external factors (as in Fig. 2C). We therefore focused on the statistical properties of the organism state fluctuations. The auto-correlation function is the single most important statistical property of a stationary stochastic process represented by a time series x(t): $$C(\Delta t)=\langle \delta x(t+\Delta t)\,\delta x(t)\rangle_{t},$$ where Δt is the time lag between the subsequent measurements of x, and δx(t) = x(t) − 〈x〉t is the deviation of x from its mean value produced by the averaging 〈x(t)〉t along the individual trajectory (see, e.g.,40). The autocorrelation function of x = DOSI averaged over individual trajectories in subsequent age cohorts of the GEROLONG dataset was plotted vs. the delay time in Fig. 3A and exhibited exponential decay over a time scale of ~2–8 weeks depending on age. Fig. 3: Physiological state fluctuations and loss of resilience.
A The auto-correlation function C(Δt) of the Dynamic organism state indicator (DOSI) fluctuations during several weeks, averaged in sequential 10-year age-cohorts of GEROLONG subjects, showed gradual age-related remodelling. Experimental data and the fit to the autocorrelation function are shown with solid and dashed lines, respectively. The DOSI correlations are lost over the time Δt between the measurements and, hence, the DOSI deviations from the age norm reach the equilibrium distribution faster in younger individuals. B The auto-correlation function C(Δt) of fluctuations of the negative logarithm of steps per day during several weeks, averaged in sequential 10-year age-cohorts of subjects in the GEROLONG step-counts subset, showed similar gradual age-related remodelling. C The DOSI relaxation rate (or the inverse characteristic recovery time) computed for sequential age-matched cohorts from the GEROLONG dataset decreased approximately linearly with age and could be extrapolated to zero at an age in the range of ~110–170 y.o. (at this point, there is complete loss of resilience and, hence, loss of stability of the organism state). The solid lines and shaded areas show the line of linear regression fit and its 95% confidence interval. D The inverse variance of DOSI decreased linearly in all investigated datasets and its extrapolated value vanished (hence, the variance diverged) at an age in the range of 120–150 y.o. We performed the linear fit for subjects 40 y.o. and older, excluding the "most frail" ("compound morbidity index", CMI > 0.6) individuals. The solid lines and shaded areas show the line of linear regression fit and its 95% confidence interval. The blue dots and lines show the inverse variance of the log-scaled measure of total physical activity (the number of steps per day recorded by a wearable accelerometer) for NHANES participants. PhenoAge29, calculated using explicit age and additional blood biochemistry parameters, also demonstrated an age-related decrease of the inverse variance in the NHANES population. The exponential character of the autocorrelation function, \(C(\Delta t) \sim \exp(-\varepsilon \Delta t)\), is a signature of stochastic processes following a simple Langevin equation: $$\delta \dot{x}=-\varepsilon \delta x+f(t),$$ where \(\delta \dot{x}\) stands for the rate of change in fluctuations δx, ε is the relaxation or recovery rate, and f is the "force" responsible for deviations of the organism state from its equilibrium. The auto-correlation function decay time (or simply the auto-correlation time) is inversely proportional to the relaxation (recovery) rate ε and characterizes the time scale involved in the equilibration of a system's state in response to external perturbations. We therefore propose using this quantity as a measure of an organism's "resilience", the capacity of an individual organism to resist and recover from the effects of physiological or pathological stresses41,42. We fitted the DOSI auto-correlation functions averaged over individuals representing subsequent age-matched cohorts to an exponential function of the time delay. We observed that the recovery rates obtained from fits to the data in subsequent age cohorts decreased approximately linearly with age (Fig. 3C). Extrapolation to older ages suggested that the equilibration rate and hence the resilience is gradually lost over time and is expected to vanish (and hence the recovery time to diverge) at some age of ~120–150 y.o.
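A minimal sketch of how the auto-correlation function and the recovery rate ε of Eq. (3) can be estimated from an irregularly sampled trajectory is shown below. The lag-binning scheme, the Ornstein–Uhlenbeck toy trajectory, and all names are illustrative assumptions rather than the authors' pipeline; in the paper C(Δt) is additionally averaged over all individuals in an age cohort before the exponential fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def autocorrelation(times, values, lag_edges):
    """Empirical C(dt) = <dx(t) dx(t+dt)> for an irregularly sampled series;
    measurement pairs are grouped into lag bins (lag_edges: right bin edges, days)."""
    dx = values - values.mean()
    acf, counts = np.zeros(len(lag_edges)), np.zeros(len(lag_edges))
    for i in range(len(times)):
        for j in range(i, len(times)):
            k = np.searchsorted(lag_edges, times[j] - times[i])
            if k < len(lag_edges):
                acf[k] += dx[i] * dx[j]
                counts[k] += 1
    return np.where(counts > 0, acf / np.maximum(counts, 1), np.nan)

def exp_decay(dt, c0, eps):
    return c0 * np.exp(-eps * dt)

# Toy trajectory: an Ornstein-Uhlenbeck process with a ~30-day relaxation time,
# sampled at irregular intervals over three years (stands in for one person's DOSI)
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 3 * 365, 40))
x = np.zeros(t.size)
for i in range(1, t.size):
    rho = np.exp(-(t[i] - t[i - 1]) / 30.0)
    x[i] = rho * x[i - 1] + np.sqrt(1.0 - rho ** 2) * rng.normal()

lag_edges = np.arange(14, 126, 14, dtype=float)   # two-week lag bins up to ~4 months
C = autocorrelation(t, x, lag_edges)
ok = ~np.isnan(C)
popt, _ = curve_fit(exp_decay, lag_edges[ok], C[ok], p0=(1.0, 0.03),
                    bounds=([0.0, 1e-4], [10.0, 1.0]))
print(f"recovery rate = {popt[1]:.3f} per day; recovery time ~ {1 / popt[1]:.0f} days")
```

Repeating this fit for each 10-year age cohort and regressing the fitted ε against cohort age is what yields the linear decline and the extrapolated zero crossing discussed in the text.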
The exponential decay of the auto-correlation function is not merely a peculiarity of an organism state indicator computed from CBC. We were able to use another set of high-resolution longitudinal measurements of daily step counts collected by wearable devices. Step-count measurements were obtained from users of a fitness wristband (3032 females, 1783 males aged 20–85 y.o.). The measurements for each user covered at least 30 days and up to 5 years. In15 we observed that the variability of physical activity (namely, the logarithm of the average physical activity), which is another hallmark of aging and is associated with age and the risks of death or major diseases, also increases with age and hence may be used as an organism state indicator. The autocorrelation function of the physical activity levels shows the already familiar exponential profile and signs of the loss of resilience in subsequent age-matched cohorts, as shown in Fig. 3B. The recovery rate, inferred as the inverse autocorrelation time from the physical activity level trajectories, is plotted alongside the recovery rates from the CBC-derived DOSI in Fig. 3C. We observed that the recovery rates revealed by the organism state fluctuations measured in apparently unrelated subsystems of the organism (the blood cell counts and physical activity levels) are highly concordant: both decrease as a function of chronological age at the same pace and, if the extrapolation holds, vanish at the same limiting age. Eq. (3) predicts that the variance of DOSI should also increase with age. Indeed, according to the solution of the Langevin equation with a purely random and uncorrelated force, 〈f(t + Δt)f(t)〉t = Bδ(Δt) (with δ(x) being the Dirac delta-function and B being the power of the stochastic noise), the fluctuations of x = DOSI should increase with age, thus reflecting the dynamics of the recovery rate: σ² ≡ 〈δx²〉 ~ B/ε. Remarkably, the variability in the DOSI did increase with age in every dataset evaluated in this study. Following our theoretical expectations of the inverse relation between the resilience and the fluctuations, we plotted the inverse variance of the DOSI computed in sex- and age-matched cohorts representing the most healthy subjects (see Fig. 3D). Again, extrapolation suggested that, if the tendency holds at older ages, the population variability would diverge at an age of ~120–150 y.o. As expected, the amplification of the fluctuations of the organism state variables with age is not limited to CBC features. In Fig. 3D we plotted the inverse variance of this physical activity feature and found that it decreases linearly with age in such a way that the extrapolated variance diverges at the same critical point at the age of ~120–150 y.o. To demonstrate the universality of the organism state dynamics, we followed the fluctuation properties of PhenoAge, another log-linear mortality predictor trained using explicit age, sex and a number of biochemical blood markers29. By its nature, PhenoAge is another DOSI produced from a different set of features. Unfortunately, we could not obtain a sufficient number of individuals with all the relevant marker measurements from the InVitro longitudinal dataset. Accordingly, we could not compute the corresponding autocorrelation function.
We were, however, able to compute PhenoAge for NHANES subjects and observed an increase in variability of the PhenoAge estimate as a function of chronological age and a possible divergence of PhenoAge fluctuations at around the age of 150 y.o. The simultaneous divergence of the organism state recovery times (critical slowing down in Fig. 3C) and the increasing dynamic range of the organism state fluctuations (critical fluctuations in Fig. 3D), observed independently in two biological signals, is characteristic of the proximity of a critical point23,40 at some advanced age over 100 y.o. Under these circumstances, the organism state dynamics are stochastic and dominated by the variation of a single dynamic variable (also known as the order parameter) associated with criticality23,43. A proper identification of such a feature requires massive high-quality longitudinal measurements and sophisticated approaches such as auto-regressive models. In a similar study involving CBC variables of aging mice, we were able to obtain an accurate predictor associated with the age, risks of death (and the remaining lifespan), and frailty44. In this work we turned the reasoning around and chose to quantify the organism state by the log-linear proportional hazards estimate of the mortality rate, following15,29,45, using CBC and physical activity variables. This inherently dynamic quantitative organism state indicator (DOSI) increased with age, predicted the prospective incidence of age-related diseases and death, and was elevated in cohorts representing typical life-shortening lifestyles, such as smoking, or exhibiting multiple morbidity. The log-linear risk model predictor demonstrated a non-trivial dependence on age also early in life, that is, in the age range with almost no recorded mortality events in the training dataset. The age-cohort-averaged DOSI increased and then reached a plateau (Fig. 1B), in good quantitative agreement with the predictions of the universal theory of ontogenetic growth31. The agreement between the theory and the DOSI dependence on age is very good, and hence we are led to believe that the features of the "aging trajectory" in Fig. 1A are not coincidental artifacts of data analysis. According to the theory, the development of any organism is the result of a competition between the production of new tissue and life-sustaining activities. The total amount of energy available scales as a fractional power of the body mass, \(m^{3/4}\), according to the universal allometric Kleiber–West law46,47. The energy requirements for organism maintenance, however, increase linearly as the body mass grows, and hence the initial excess metabolic power drives the growth of the organism until it reaches the dynamic equilibrium corresponding to the mature animal state. As we can see in Fig. 1B, the mature human organism is dynamically unstable in the long run and deviations from the ontogenetic growth theory predictions pick up slowly well after the organism is fully formed. The organism state dynamics measured by DOSI over the lifetime qualitatively reveal at least three regimes reflecting growth, maturation, and aging, respectively. The apparent life-stages correspond well to the results of the multivariate PCA of CBC variance (Fig. 1A) in this work and also to that of physical activity acceleration/deceleration patterns from15. Every arm of the aging trajectory is characterized by a specific set of features strongly associated with age in the signal.
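The extrapolation of the inverse variance to its zero crossing described above reduces to a linear regression against cohort age. A minimal sketch with placeholder cohort variances (not the study data), assuming the critical age is simply the root of the fitted line:

```python
import numpy as np

# sigma2[i]: variance of the DOSI (or of log physical activity) in the age cohort ages[i]
# placeholder numbers that roughly reproduce the reported trend
ages = np.arange(45, 95, 10, dtype=float)
sigma2 = 1.0 / (1.0 - ages / 130.0)

# Linear fit of the *inverse* variance vs. age; its zero crossing is the
# extrapolated critical age at which the fluctuations diverge.
slope, intercept = np.polyfit(ages, 1.0 / sigma2, 1)
critical_age = -intercept / slope
print(f"extrapolated critical age ~ {critical_age:.0f} years")
```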
Schematically, the reported features of the longitudinal organism state dynamics can be summarized with the help of the following qualitative picture (Fig. 4). Far from the critical point (at younger ages), the organism state perturbations can be thought of as confined to the vicinity of a possible stable equilibrium state in a potential energy basin (A). Initially, the dynamic stability is provided by a sufficiently high potential energy barrier (B) separating this stability basin from the inevitably present dynamically unstable regions (C) in the space of physiological parameters. While in the stability basin, the organism state experiences stochastic deviations from the metastable equilibrium state, which is gradually displaced (see the dotted line D) in the course of aging even for successfully aging individuals. Fig. 4: Schematic representation of loss of resilience along aging trajectories. Representative aging trajectories are superimposed over the potential energy landscape (vertical axis) representing regulatory constraints. The stability basin (A) is separated from the unstable region (C) by the potential energy barrier (B). Aging leads to a gradual decrease in the activation energy and barrier curvature and an exponential increase in the probability of barrier crossing. The stochastic activation into a dynamically unstable (frail) state is associated with the acquisition of multiple morbidities and the certain death of an organism. The white dotted line (D) represents the trajectory of the attraction basin minimum. Examples 1 (black solid line) and 2 (black dashed line) represent individual life-long stochastic DOSI trajectories that differ with respect to the age of first chronic disease diagnosis. The characteristic organism state auto-correlation time demonstrated here (3–6 weeks, see Fig. 3A) is much shorter than the lifespan. The dramatic separation of time scales makes it very unlikely that the linear decline of the recovery force measured by the recovery rate in Fig. 3C can be explained by the dynamics of the organism state captured by the DOSI variation alone. Therefore, we conclude that the progressive remodeling of the attraction basin geometry reflects adjustment of the DOSI fluctuations to the slow independent process that is aging itself. In this view, the aging drift of the DOSI mean in cohorts of healthy individuals (as in Fig. 1B) is the adaptive organism-level response reflecting, on average, the increasing stress produced by the aging process. The longitudinal analysis in this work demonstrated that the organism state measured by DOSI follows a stochastic trajectory driven mainly by the organism responses to unpredictable stress factors. Over the lifetime, the DOSI increases slowly, on average. The dynamic range of the organism state fluctuations is proportional to the power of the noise and is inversely proportional to the recovery rate of the DOSI fluctuations. Therefore, the organism state of healthy individuals at any given age is described by the mean DOSI level, the DOSI variability and its auto-correlation time. Together, the three quantities comprise the minimum set of biomarkers of stress and aging in humans and could be determined and altered, in principle, by different biological mechanisms and therapeutic modalities. The DOSI recovery rate characterizes fluctuations of DOSI on time scales from a few weeks to a few months, decreases with age and thus indicates the progressive loss of physiological resilience.
Such age-related remodeling of recovery rates has been previously observed in studies of various physiological and functional parameters in humans and other mammals. For example, in humans, a gradual increase in the recovery time required after macular surgery was reported in sequential 10-year age cohorts48, and age was shown to be a significant factor for twelve-month recovery and the duration of hospitalization after hip fracture surgery49,50, coronary artery bypass51, and acute lateral ankle ligament sprain52. A mouse model suggested that the rate of healing of skin wounds can also be a predictor of longevity53. The resilience can only be measured directly from high-quality longitudinal physiological data. The Framingham Heart Study7, the Dunedin Multidisciplinary Health and Development Study54 and other efforts have produced a growing number of reports involving statistical analysis of repeated measurements from the same persons; see, e.g.,55,56. Most of the time, however, the subsequent samples are years apart and hence the time between the measurements greatly exceeds the organism state autocorrelation time reported here. This is why, to the best of our understanding, the relation between the organism state recovery rate and mortality has remained largely elusive. In the presence of stresses, the loss of resilience should lead to destabilization of the organism state. Indeed, in a reasonably smooth potential energy landscape forming the basin of attraction, the activation energy required for crossing the protective barrier (B) decreases along with the curvature at the same pace, that is, linearly with age. Whenever the protective barrier is crossed, dynamic stability is lost (see example trajectories 1 and 2 in Fig. 4, which differ by the age of crossing) and deviations in the physiological parameters develop beyond control, leading to multiple morbidities and, eventually, death. On a population level, activation into such a frail state is driven by stochastic forces and occurs approximately at the age corresponding to the end of healthspan, understood as "disease-free survival". Since the probability of barrier crossing is an exponential function of the required activation energy (i.e., the barrier height)40, the weak coupling between DOSI fluctuations and aging is then the dynamic origin of the exponential mortality acceleration known as the Gompertz law. Since the remaining lifespan of an individual in the frail state is short, the proportion of frail subjects at any given age is proportional to the barrier crossing rate, which is an exponential function of age (see Fig. 2B). The end of healthspan can therefore be viewed as a form of a nucleation transition40, corresponding in our case to the spontaneous formation of states of chronic diseases out of the metastable phase (healthy organisms). The DOSI is then the order parameter associated with the organism-level stress responses at younger ages and plays the role of the "reaction coordinate" of the transition to the frail state later in life. All chronic diseases and death in our model originate from the dynamic instability associated with single protective barrier crossings. This is, of course, a simplification, and yet the assumption could naturally explain why mortality and the incidence of major age-related diseases increase exponentially with age at approximately the same rate3. The reduction of slow organism state dynamics to that of a single variable is typical for the proximity of a tipping or critical point23.
The DOSI is therefore a property of the organism as a whole, rather than a characteristic of any specific functional subsystem or organism compartment. We did observe a close concordance between the decrease in the organism state recovery rates (Fig. 3C) and the DOSI variance divergence (Fig. 3D) from seemingly unrelated sources such as blood markers and the physical activity variables. This is likely a manifestation of a common dynamic origin of a substantial part of the fluctuations in diverse biological signals ranging from blood markers (CBC and PhenoAge covariates) to physical activity levels. We therefore predict that a similar divergence of variance and increase in auto-correlation times will be found in future studies involving other risk-associated markers, including DNAm clocks. According to the presented model, early in life the dynamics of DOSI is described by the simple Langevin Eq. (3). External stresses (such as smoking) or diseases produce perturbations that modify the shape of the effective potential, leading to a shift of the equilibrium DOSI position. For example, the mean DOSI values in cohorts of individuals who never smoked or who quit smoking are indistinguishable from each other, yet significantly different from (lower than) the mean DOSI in the cohort of smokers (Fig. 2C). Thus, the effect of the external stress factor is reflected by a change in the DOSI and is reversed as soon as the factor is removed. These findings agree with earlier observations suggesting that the effects of smoking on remaining lifespan and on the risks of developing diseases are mostly reversible once smoking is ceased well before the onset of chronic diseases15,39. The decline in the lung cancer risk after smoking cessation57 is slower than the recovery rate reported here. This may be evidence suggesting that long-term stresses can cause hard-to-repair damage to specific tissues and thus produce lasting effects on resilience. In the absence of chronic diseases, when the organism state is dynamically stable, the elevation of physiological variables associated with the DOSI indicates a reversible activation of the most generic protective stress responses. Moderately elevated DOSI levels therefore reflect protective responses that can be measured by molecular markers (e.g., C-reactive protein) and affect general physical and mental health status45. We also predict that death is preceded by activation into a state with excess DOSI and loss of resilience. The excessive DOSI levels observed in older individuals can be thought of as an aberrant activation of stress-responses beyond the dynamic stability range. Elevated levels and long auto-correlation times of DOSI fluctuations are therefore characteristics of chronic diseases and predict death. We propose that therapies targeting frailty-associated phenotypes (e.g., inflammation) would, therefore, produce distinctly different effects in disease-free vs. frail populations. In healthy subjects, who reside in the region of the stability basin (A) (see Fig. 4), a treatment-induced reduction of DOSI would quickly saturate over the characteristic auto-correlation time and lead to a moderate decrease in long-term risk of morbidity and death without a change in resilience. Technically, this would translate into an increase in healthspan, although the reduction of health risks would be transient and disappear after cessation of the treatment.
In frail individuals, however, the intervention could produce lasting effects and reduce frailty, thus increasing lifespan beyond healthspan. This argument may be supported by longitudinal studies in mice suggesting that the organism state is dynamically unstable, that the organism state fluctuations get amplified exponentially at a rate compatible with the mortality rate doubling time, and that transient treatments with life-extending drugs such as rapamycin produce a lasting attenuation of the frailty index44. The emergence of chronic diseases out of increasingly unstable fluctuations of the organism state provides the necessary dynamic argument to support the derivation of the Gompertz mortality law in the Strehler–Mildvan theory of aging58. In59,60, the authors suggested that the exponential growth of disease burden observed in the National Population Health Survey of Canadians over 20 y.o. could be explained by an age-related decrease in organism recovery in the face of a constant rate of exposure to environmental stresses. Our study provides evidence suggesting that late in life the organism state dynamics is dominated by features that originate from the proximity of the critical point, corresponding to the vanishing resilience. The exact parameters, such as maximum lifespan, are the results of extrapolations, yielding estimates in the range of 100–150 years. The questions of whether the critical point corresponds to a specific age, or is even achievable along a realistic trajectory, are of limited practical importance: due to the presence of strong stochastic forces, most individuals escape the attraction basin, lose resilience and disintegrate into states corresponding to chronic diseases well before reaching the ultimate age. Hence the extrapolation may serve to establish the upper bound on attainable age, or the limiting lifespan. We therefore argue that the loss of resilience cannot be avoided even in the most successfully aging individuals and, therefore, could explain the very high mortality seen in cohorts of supercentenarians characterized by the so-called compression of morbidity (late onset of age-related diseases61). Formally, such a state of "zero resilience" at the critical point corresponds to the absolute zero on the vitality scale in the Strehler–Mildvan theory of aging, thus representing a natural limit on human lifespan. We also note that very late in life, as the probability of the loss of resilience increases, so should the deviations from the Gompertz mortality law. A recent careful analysis of human demographic data supports this argument and yields an estimate for the limiting lifespan of 138 years62. The semi-quantitative description of human aging and morbidity proposed here should work well long before the maximum age and belongs to a class of phenomenological models. Whereas it is possible to associate the variation of the organism state measured by DOSI with the effects of stresses or diseases, the data analysis presented here does not provide any mechanistic explanations for the progressive loss of resilience. It is worth noting that a recent study predicts a maximum human lifespan limit from telomere shortening63 that is compatible with the estimates presented here. It would therefore be interesting to see if the resilience loss in human cohorts is associated with, or even caused by, the loss of regenerative capacity due to the Hayflick limit.
The proximity of the critical point revealed in this work indicates that the apparent human lifespan limit is not likely to be improved by therapies aimed against specific chronic diseases or the frailty syndrome. Thus, no dramatic improvement of the maximum lifespan, and hence strong life extension, is possible by preventing or curing diseases without interception of the aging process, the root cause of the underlying loss of resilience. We do not foresee any laws of nature prohibiting such an intervention. Therefore, further development of the aging model presented in this work may be a step toward experimental demonstration of a dramatic life-extending therapy. Complete blood count datasets NHANES CBC data were retrieved from the category "Complete Blood Count with 5-part Differential - Whole Blood" of Laboratory data for NHANES surveys 1999–2014. Corresponding UKB CBC data fields with related database codes are listed in Supplementary Table 1. The fraction of samples with missing (or filled with zero) CBC data was <0.035% in any studied dataset and those samples were discarded. Differential white blood cell percentages were converted to cell counts by multiplication by 0.01 × white blood cell count. All CBC parameters were log-transformed and normalized to zero mean and unit variance based on data of NHANES participants aged 40 y.o. and older in order to carry out the PCA and train the Cox proportional hazards model. Step counts datasets NHANES step counts per minute recorded during 1 week were retrieved from the category "Physical Activity Monitor" of Examination data for the NHANES 2005–2006 survey. Autocorrelation of log-transformed daily step counts was calculated using data from "Fitbit" devices of 4532 users aged 20–80 y.o. (1601 male and 2892 female). Hazards model The Cox proportional hazards model was trained using NHANES 2015 Public-Use Linked Mortality data. We used CBC data and mortality-linked follow-up available for 40,592 NHANES participants aged 18–85 y.o. The NHANES population aged 40–85 y.o. was split randomly into training (12,851 participants) and test (12,883 participants) subsets. The Cox model was trained using the training subset (6259 male and 6592 female) with 2392 recorded death events during follow-up until the year 2015 (1999–2014 surveys). CBC components and the biological sex label were used as covariates. The model predicted all-cause mortality well and yielded a concordance index value of CI = 0.68 and CI = 0.67 in the NHANES training and test subsets and CI = 0.65 in the UKB (samples collected 2007–2011, 218,530 male and 257,965 female participants aged 39–75 y.o., 28,210 recorded death events during follow-up until the year 2020). The Cox proportional hazards model was used as implemented in the lifelines package (version 0.25.1) in Python. The model was then applied to calculate the hazards ratio for all samples in the GEROLONG, UKB and NHANES cohorts (including individuals younger than 40 y.o.). The DOSI (defined as the log-hazard ratio of the risk model throughout the manuscript) turned out to be equally well associated with mortality in the NHANES study (HR = 1.43) used for training of the risk model and in the independent UKB study (HR = 1.35; Supplementary Table 2), which was used as a validation dataset. All data analyses were carried out in Python 3.8 scripts using the libraries NumPy (version 1.18.5), SciPy (version 1.5.2) and Lifelines (version 0.25.1).
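Based on the Methods description (log-transformed, z-scored CBC covariates plus sex; a lifelines Cox model; DOSI as the log partial hazard), a schematic reimplementation might look as follows. The column names and the synthetic data frame are assumptions for illustration only; they are not the actual NHANES field names or the authors' code.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

CBC_COLS = ["wbc", "rbc", "hgb", "plt", "lymphocytes", "neutrophils"]  # illustrative subset

def preprocess(df, reference):
    """Log-transform CBC counts and z-score them against a reference cohort
    (the paper normalizes against NHANES participants aged 40 y.o. and older)."""
    out = df.copy()
    for c in CBC_COLS:
        ref = np.log(reference[c])
        out[c] = (np.log(out[c]) - ref.mean()) / ref.std()
    return out

# Synthetic placeholder cohort standing in for the NHANES training subset
rng = np.random.default_rng(0)
n = 500
train_raw = pd.DataFrame({c: rng.lognormal(1.0, 0.2, n) for c in CBC_COLS})
train_raw["sex"] = rng.integers(0, 2, n)
train_raw["follow_up_years"] = rng.uniform(1, 15, n)
train_raw["death"] = rng.integers(0, 2, n)

train = preprocess(train_raw, reference=train_raw)
covariates = CBC_COLS + ["sex"]

cph = CoxPHFitter()
cph.fit(train[covariates + ["follow_up_years", "death"]],
        duration_col="follow_up_years", event_col="death")

# DOSI is taken as the log partial hazard of the fitted model (age is not a covariate)
train["DOSI"] = cph.predict_log_partial_hazard(train[covariates])
print("concordance index:", cph.concordance_index_)
```

Applying the fitted model's predict_log_partial_hazard to the GEROLONG and UKB samples, prepared with the same preprocessing, would yield the DOSI trajectories analyzed in the rest of the paper.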
The most prevalent chronic diseases and health status We quantified the health status of individuals using the normalized sum of major age-related medical conditions (MCQ) that they were diagnosed with, which we termed the CMI. The CMI is similar in spirit to the frailty index suggested for NHANES33. We were not able to use the frailty index because it was based on Questionnaire and Examination data that were not consistent between all NHANES surveys. Also, we did not have enough corresponding data for the UKB dataset. For CMI determination, we followed61 and selected the top 11 morbidities strongly associated with age after the age of 40. The list of health conditions included cancer (any kind), cardiovascular conditions (angina pectoris, coronary heart disease, heart attack, heart failure, stroke, or hypertension), diabetes, arthritis, and emphysema. Notably, we did not include dementia in the list of diseases since it occurs late in life and hence is severely underrepresented in the UKB cohort due to its limited age range. We categorized participants who had more than 6 of those conditions as the "most frail" (CMI > 0.6), and those with CMI < 0.1 as the "non-frail". NHANES data for diagnosis with a health condition and age at diagnosis are available in the questionnaire category "MCQ". Data on diabetes and hypertension were additionally retrieved from the questionnaire categories "Diabetes" (DIQ) and "Blood Pressure & Cholesterol", respectively. UK Biobank does not provide aggregated data on these MCQ. Rather, it provides self-reported questionnaire data (UKB, Category 100074) and diagnoses made during hospital in-patient stays according to ICD10 codes (UKB, Category 2002). We aggregated self-reported and ICD10 (block level) data to match that of NHANES for transferability of the results between populations and datasets. We used the following ICD10 codes to cover the health conditions in UK Biobank: hypertension (I10-I15), arthritis (M00-M25), cancer (C00-C99), diabetes (E10-E14), coronary heart disease (I20-I25), myocardial infarction (I21, I22), angina pectoris (I20), stroke (I60-I64), emphysema (J43, J44), and congestive heart failure (I50). Consistent with our previous observations in the NHANES and UKB cohorts, the DOSI also increased with age in the longitudinal GEROLONG cohort. The average DOSI level as well as its population variance at any given age were, however, considerably larger than those in the reference "non-frail" groups from the NHANES and UKB studies (see Supplementary Fig. 1a). This difference likely reflects an enrollment bias: many of the GEROLONG blood samples were obtained from patients visiting clinic centers, presumably due to health issues. This could explain why the GEROLONG population appeared generally more frail in terms of DOSI than the reference cohorts of the same age from other studies (Supplementary Fig. 1a, compare the relative positions of the solid blue line and the two dashed lines representing the GEROLONG cohort and the frail cohorts of the NHANES and UKB studies, respectively). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. The data that support the findings of this study are available at the NHANES web-site https://www.cdc.gov/nchs/nhanes and via the UK Biobank data access procedure described at https://www.ukbiobank.ac.uk/enable-your-research. Additional data are available from the corresponding authors on reasonable request. Mitnitski, A. & Rockwood, K.
The rate of aging: the rate of deficit accumulation does not change over the adult life span. Biogerontology 17, 199–204 (2016). Yu, R., Wu, W. C., Leung, J., Hu, S. C. & Woo, J. Frailty and its contributory factors in older adults: a comparison of two Asian regions (Hong Kong and Taiwan). Int. J. Environ. Res. Public Health 14, 1096 (2017). Zenin, A. et al. Identification of 12 genetic loci associated with human healthspan. Commun. Biol. 2, 1–11 (2019). Podolskiy, D. I., Lobanov, A. V., Kryukov, G. V. & Gladyshev, V. N. Analysis of cancer genomes reveals basic features of human aging and its role in cancer development. Nat. Commun. 7, 12157 (2016). Niccoli, T. & Partridge, L. Ageing as a risk factor for disease. Curr. Biol. 22, R741–R752 (2012). Barzilai, N. & Rennert, G. The rationale for delaying aging and the prevention of age-related diseases. Rambam Maimonides Med. J. 3, e0020 (2012). Dawber, T. R., Meadors, G. F. & Moore, F. E. Jr. Epidemiological approaches to heart disease: the Framingham study. Am. J. Public Health Nations Health 41, 279–286 (1951). Sudlow, C. et al. UK Biobank: an open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Med. 12, e1001779 (2015). Brennan, P. et al. Chronic disease research in Europe and the need for integrated population cohorts. Eur. J. Epidemiol. 32, 741–749 (2017). Levine, M. E. Modeling the rate of senescence: can estimated biological age predict mortality more accurately than chronological age? J. Gerontol. A Biol. Sci. Med. Sci. 68, 667–674 (2013). Hannum, G. et al. Genome-wide methylation profiles reveal quantitative views of human aging rates. Mol. Cell 49, 359–367 (2013). Horvath, S. DNA methylation age of human tissues and cell types. Genome Biol. 14, R115 (2013). Terrier, P. & Reynard, F. Effect of age on the variability and stability of gait: a cross-sectional treadmill study in healthy individuals between 20 and 69 years of age. Gait Post. 41, 170–174 (2015). Althoff, T. et al. Large-scale physical activity data reveal worldwide activity inequality. Nature 547, 336 (2017). Pyrkov, T. V. et al. Quantitative characterization of biological age and frailty based on locomotor activity records. Aging 10, 2973–2990 (2018). Jylhävä, J., Pedersen, N. L. & Hägg, S. Biological age predictors. EBioMedicine 21, 29–36 (2017). Gompertz, B. A sketch of an analysis and notation applicable to the value of life contingencies. Philos. Transact. Royal Soc. 110, 214–294 (1820). Makeham, W. M. On the law of mortality and construction of annuity tables. Assur. Mag. J. Inst. Actuaries 8, 301–310 (1860). López-Otín, C., Blasco, M. A., Partridge, L., Serrano, M. & Kroemer, G. The hallmarks of aging. Cell 153, 1194–1217 (2013). Gijzel, S. M. W. et al. Resilience in clinical care: getting a grip on the recovery potential of older adults. J. Am. Geriatr. Soc. 67, 2650–2657 (2019). Whitson, H. E. et al. Physical resilience in older adults: systematic review and development of an emerging construct. J. Gerontol. Series A: Biomed. Sci. Med. Sci. 71, 489–495 (2015). Olde Rikkert, M. G. M. et al. Slowing down of recovery as generic risk marker for acute severity transitions in chronic diseases. Crit. Care Med. 44, 601–606 (2016). Scheffer, M. et al. Early-warning signals for critical transitions. Nature 461, 53–59 (2009).
Marten, S. et al. Complex systems: foreseeing tipping points. Nature 467, 411 (2010).
Emanuela, C. et al. Neutrophil-to-lymphocyte ratio: an emerging marker predicting prognosis in elderly adults with community-acquired pneumonia. J. Am. Geriatr. Soc. 65, 1796–1801 (2017).
Ozyurek, B. A. et al. Prognostic value of the neutrophil to lymphocyte ratio (nlr) in lung cancer cases. Asian Pacific J. Cancer Prev. 18, 1417 (2017).
Lippi, G., Salvagno, G. L. & Guidi, G. C. Red blood cell distribution width is significantly associated with aging and gender. Clin. Chem. Labor. Med. (CCLM) 52, e197–e199 (2014).
Seyhan, E. C. et al. Red blood cell distribution and survival in patients with chronic obstructive pulmonary disease. J. Chron. Obstruct. Pulmonary Dis. 10, 416–424 (2013).
Levine, M. E. et al. An epigenetic biomarker of aging for lifespan and healthspan. Aging (Albany NY) 10, 573 (2018).
Pyrkov, T. V. et al. Extracting biological age from biomedical data via deep learning: too much of a good thing? Sci. Rep. 8, 5210 (2018).
West, G. B., James H, B. & Enquist, B. J. A general model for ontogenetic growth. Nature 413, 628–631 (2001).
Cox, D. R. Regression models and life-tables. J. Royal Statistical Soc.: Series B (Methodological) 34, 187–202 (1972).
Rockwood, K. et al. A frailty index based on deficit accumulation quantifies mortality risk in humans and in mice. Sci. Rep. 7, 43068 (2017).
Blodgett, J., Theou, O., Kirkland, S., Andreou, P. & Rockwood, K. Frailty in nhanes: comparing the frailty index and phenotype. Arch. Gerontol. Geriatr. 60, 464–470 (2015).
Gompertz, B. On the nature of the function expressive of the law of human mortality, and on a new mode of determining the value of life contingencies. Philos. Transact. Royal Soc. Lond. 115, 513–583 (1825).
Ganna, A. & Ingelsson, E. 5 year mortality predictors in 498 103 uk biobank participants: a prospective population-based study. Lancet 386, 533–540 (2015).
O'donnell, R., D, B., Wilson, S. & Djukanovic, R. Inflammatory cells in the airways in copd. Thorax 61, 448–454 (2006).
Bhat, T. et al. Neutrophil to lymphocyte ratio and cardiovascular diseases: a review. Expert Rev. Cardiovasc. Ther. 11, 55–59 (2013).
Taylor, D. H. Jr, Hasselblad, V., Henley, S. J., Thun, M. J. & Sloan, F. A. Benefits of smoking cessation for longevity. Am. J. Public Health 92, 990–996 (2002).
Landau, L. D. & Lifshitz, E. M. Physical kinetics, Vol. 10. Course of Theoretical Physics (Butterworth-Heinemann, 1981).
Hicks, G. & Miller, R.R. Physiological resilience. In Resilience in Aging, 89–103 (Springer, 2011).
Klinedinst, N.J. & Hackney, A. Physiological resilience and the impact on health. In Resilience in Aging, 105–131 (Springer, 2018).
Barzel, B. & Barabási, A. L. Universality in network dynamics. Nat. Phys. 9, 673–681 (2013).
Avchaciov, K. et al. Identification of a blood test-based biomarker of aging through deep learning of aging trajectories in large phenotypic datasets of mice. Preprint at bioRxiv https://doi.org/10.1101/2020.01.23.917286 (2020).
Pyrkov, T.V. & Fedichev, P.O. Biological age is a universal marker of aging, stress, and frailty. In Biomarkers of Human Aging, 23–36 (Springer, 2019).
Kleiber, M. et al. Body size and metabolism. Hilgardia. 6, 315–353 (1932).
West, G. B., James H, B. & Enquist, B. J. A general model for the origin of allometric scaling laws in biology. Science 276, 122–126 (1997).
Kim, Y., Kim, E. S., Yu, S. Y. & Kwak, H. W. Age-related clinical outcome after macular hole surgery. Retina 37, 80–87 (2017).
Mossey, J. M., Mutran, E., Knott, K. & Craik, R. Determinants of recovery 12 months after hip fracture: the importance of psychosocial factors. Am. J. Public Health 79, 279–286 (1989).
Koval, K. J., Skovron, M. L., Aharonoff, G. B. & Zuckerman, J. D. Predictors of functional recovery after hip fracture in the elderly. Clin. Orthopaedics Related Res. 1, 22–28 (1998).
Artinian, N. T., Duggan, C. & Miller, P. Age differences in patient recovery patterns following coronary artery bypass surgery. Am. J. Crit. Care 2, 453–461 (1993).
Thompson, J. Y. et al. Prognostic factors for recovery following acute lateral ankle ligament sprain: a systematic review. BMC Musculoskeletal Disord. 18, 421 (2017).
Yanai, H., Budovsky, A., Tacutu, R. & Fraifeld, V. E. Is rate of skin wound healing associated with aging or longevity phenotype? Biogerontology 12, 591–597 (2011).
Belsky, D. W. et al. Quantification of biological aging in young adults. Proc. Natl Acad. Sci. 112, E4104–E4110 (2015).
Alpert, A. et al. A clinically meaningful metric of immune age derived from high-dimensional longitudinal monitoring. Nat. Med. 25, 487 (2019).
Sara, A. Personal aging markers and ageotypes revealed by deep longitudinal profiling. Nat. Med. 26, 83–90 (2020).
Tindle, H. A. et al. Lifetime smoking history and risk of lung cancer: Results from the framingham heart study. J. Natl Cancer Inst. 110, 1201–1207 (2018).
Strehler, B. L. & Mildvan, A. S. General theory of mortality and aging. Science 132, 14–21 (1960).
Mitnitski, A., Song, X. & Rockwood, K. Assessing biological aging: the origin of deficit accumulation. Biogerontology 14, 709–717 (2013).
Mitnitski, A. & Rockwood, K. Aging as a process of deficit accumulation: its utility and origin. In Aging and Health-A Systems Biology Perspective, Vol 40, 85–98 (Karger Publishers, 2015).
Andersen, S. L., Sebastiani, P., Dworkis, D. A., Feldman, L. & Perls, T. T. Health span approximates life span among many supercentenarians: compression of morbidity at the approximate limit of life span. J. Gerontol. Series A: Biomed. Sci. Med. Sci. 67, 395–405 (2012).
Podolskiy, D. I. et al. The landscape of longevity across phylogeny. Preprint at bioRxiv https://doi.org/10.1101/2020.03.17.995993 (2020).
Whittemore, K., Vera, E., Martínez-Nevado, E., Sanpera, C. & Blasco, M. A. Telomere shortening rate predicts species life span. Proc. Natl Acad. Sci. 116, 15122–15127 (2019).

Acknowledgements

This research has been conducted using data from UK Biobank, a major biomedical database (UK Biobank website: www.ukbiobank.ac.uk; UK Biobank project ID 21988).

Author information

Gero PTE, Singapore, Singapore
Timothy V. Pyrkov, Konstantin Avchaciov, Andrei E. Tarkhov, Leonid I. Menshikov & Peter O. Fedichev
Skolkovo Institute of Science and Technology, Moscow, Russia
Andrei E. Tarkhov
Moscow Institute of Physics and Technology, Dolgoprudny, Moscow Region, Russia
Andrei E. Tarkhov & Peter O. Fedichev
National Research Center 'Kurchatov Institute', Moscow, Russia
Leonid I. Menshikov
Roswell Park Comprehensive Cancer Center, Elm and Carlton streets, Buffalo, NY, USA
Andrei V. Gudkov
Genome Protection, Inc., Buffalo, NY, USA
Timothy V. Pyrkov
Konstantin Avchaciov
Peter O. Fedichev
Contributions

T.V.P., L.I.M., A.V.G., and P.O.F. designed the study and analyzed the results. T.V.P., K.A., A.E.T. and P.O.F. performed calculations and data analysis. All authors discussed the results, wrote and reviewed the paper.

Corresponding authors

Correspondence to Timothy V. Pyrkov or Peter O. Fedichev.

Competing interests

P.O.F. is a shareholder of Gero PTE. A.G. is a member of Gero PTE Advisory Board. T.V.P., A.E.T., K.A., L.I.M., and P.O.F. are employees of Gero PTE. The study was funded by Gero PTE.

Peer review information

Nature Communications thanks Karen Bandeen-Roche, Andrzej Bartke, and Marten Scheffer for their contribution to the peer review of this work. Peer reviewer reports are available.

Cite this article

Pyrkov, T.V., Avchaciov, K., Tarkhov, A.E. et al. Longitudinal analysis of blood markers reveals progressive loss of resilience and predicts human lifespan limit. Nat Commun 12, 2765 (2021). https://doi.org/10.1038/s41467-021-23014-1
CommonCrawl
Profillic: Models, code, and papers for "Rodrigo F"

Are pre-trained CNNs good feature extractors for anomaly detection in surveillance videos?
Tiago S. Nazare, Rodrigo F. de Mello, Moacir A. Ponti
Recently, several techniques have been explored to detect unusual behaviour in surveillance videos. Nevertheless, few studies leverage features from pre-trained CNNs and none of them presents a comparison of features generated by different models. Motivated by this gap, we compare features extracted by four state-of-the-art image classification networks as a way of describing patches from security video frames. We carry out experiments on the Ped1 and Ped2 datasets and analyze the usage of different feature normalization techniques. Our results indicate that choosing the appropriate normalization is crucial to improve the anomaly detection performance when working with CNN features. Also, in the Ped2 dataset our approach was able to obtain results comparable to the ones of several state-of-the-art methods. Lastly, as our method only considers the appearance of each frame, we believe that it can be combined with approaches that focus on motion patterns to further improve performance.

Ego-Lane Analysis System (ELAS): Dataset and Algorithms
Rodrigo F. Berriel, Edilson de Aguiar, Alberto F. de Souza, Thiago Oliveira-Santos
Decreasing costs of vision sensors and advances in embedded hardware boosted lane-related research (detection, estimation, and tracking) in the past two decades. The interest in this topic has increased even more with the demand for advanced driver assistance systems (ADAS) and self-driving cars. Although extensively studied independently, there is still a need for studies that propose a combined solution for the multiple problems related to the ego-lane, such as lane departure warning (LDW), lane change detection, lane marking type (LMT) classification, road markings detection and classification, and detection of the presence of adjacent lanes (i.e., immediate left and right lanes). In this paper, we propose a real-time Ego-Lane Analysis System (ELAS) capable of estimating ego-lane position, classifying LMTs and road markings, performing LDW and detecting lane change events. The proposed vision-based system works on a temporal sequence of images. Lane marking features are extracted in perspective and Inverse Perspective Mapping (IPM) images that are combined to increase robustness. The final estimated lane is modeled as a spline using a combination of methods (Hough lines with Kalman filter and spline with particle filter). Based on the estimated lane, all other events are detected. To validate ELAS and cover the lack of lane datasets in the literature, a new dataset with more than 20 different scenes (in more than 15,000 frames) and considering a variety of scenarios (urban road, highways, traffic, shadows, etc.) was created. The dataset was manually annotated and made publicly available to enable evaluation of several events that are of interest for the research community (i.e., lane estimation, change, and centering; road markings; intersections; LMTs; crosswalks and adjacent lanes). ELAS achieved high detection rates in all real-world events and proved to be ready for real-time applications.
* Image and Vision Computing 68 (2017) 64-75 * 13 pages, 17 figures, github.com/rodrigoberriel/ego-lane-analysis-system, and published by Image and Vision Computing (IMAVIS) Automatic Large-Scale Data Acquisition via Crowdsourcing for Crosswalk Classification: A Deep Learning Approach Rodrigo F. Berriel, Franco Schmidt Rossi, Alberto F. de Souza, Thiago Oliveira-Santos Correctly identifying crosswalks is an essential task for the driving activity and mobility autonomy. Many crosswalk classification, detection and localization systems have been proposed in the literature over the years. These systems use different perspectives to tackle the crosswalk classification problem: satellite imagery, cockpit view (from the top of a car or behind the windshield), and pedestrian perspective. Most of the works in the literature are designed and evaluated using small and local datasets, i.e. datasets that present low diversity. Scaling to large datasets imposes a challenge for the annotation procedure. Moreover, there is still need for cross-database experiments in the literature because it is usually hard to collect the data in the same place and conditions of the final application. In this paper, we present a crosswalk classification system based on deep learning. For that, crowdsourcing platforms, such as OpenStreetMap and Google Street View, are exploited to enable automatic training via automatic acquisition and annotation of a large-scale database. Additionally, this work proposes a comparison study of models trained using fully-automatic data acquisition and annotation against models that were partially annotated. Cross-database experiments were also included in the experimentation to show that the proposed methods enable use with real world applications. Our results show that the model trained on the fully-automatic database achieved high overall accuracy (94.12%), and that a statistically significant improvement (to 96.30%) can be achieved by manually annotating a specific part of the database. Finally, the results of the cross-database experiments show that both models are robust to the many variations of image and scenarios, presenting a consistent behavior. * Computers & Graphics, 2017, vol. 68, pp. 32-42 * 13 pages, 13 figures, 3 videos, and GitHub with models Deep Learning Based Large-Scale Automatic Satellite Crosswalk Classification Rodrigo F. Berriel, Andre Teixeira Lopes, Alberto F. de Souza, Thiago Oliveira-Santos High-resolution satellite imagery have been increasingly used on remote sensing classification problems. One of the main factors is the availability of this kind of data. Even though, very little effort has been placed on the zebra crossing classification problem. In this letter, crowdsourcing systems are exploited in order to enable the automatic acquisition and annotation of a large-scale satellite imagery database for crosswalks related tasks. Then, this dataset is used to train deep-learning-based models in order to accurately classify satellite images that contains or not zebra crossings. A novel dataset with more than 240,000 images from 3 continents, 9 countries and more than 20 cities was used in the experiments. Experimental results showed that freely available crowdsourcing data can be used to accurately (97.11%) train robust models to perform crosswalk classification on a global scale. 
* 5 pages, 3 figures, accepted by IEEE Geoscience and Remote Sensing Letters Acoustic Modeling Using a Shallow CNN-HTSVM Architecture Christopher Dane Shulby, Martha Dais Ferreira, Rodrigo F. de Mello, Sandra Maria Aluisio High-accuracy speech recognition is especially challenging when large datasets are not available. It is possible to bridge this gap with careful and knowledge-driven parsing combined with the biologically inspired CNN and the learning guarantees of the Vapnik Chervonenkis (VC) theory. This work presents a Shallow-CNN-HTSVM (Hierarchical Tree Support Vector Machine classifier) architecture which uses a predefined knowledge-based set of rules with statistical machine learning techniques. Here we show that gross errors present even in state-of-the-art systems can be avoided and that an accurate acoustic model can be built in a hierarchical fashion. The CNN-HTSVM acoustic model outperforms traditional GMM-HMM models and the HTSVM structure outperforms a MLP multi-class classifier. More importantly we isolate the performance of the acoustic model and provide results on both the frame and phoneme level considering the true robustness of the model. We show that even with a small amount of data accurate and robust recognition rates can be obtained. * Pre-review version of Bracis 2017 Counterexample Guided Inductive Optimization Applied to Mobile Robots Path Planning (Extended Version) Rodrigo F. Araújo, Alexandre Ribeiro, Iury V. Bessa, Lucas C. Cordeiro, João E. C. Filho We describe and evaluate a novel optimization-based off-line path planning algorithm for mobile robots based on the Counterexample-Guided Inductive Optimization (CEGIO) technique. CEGIO iteratively employs counterexamples generated from Boolean Satisfiability (SAT) and Satisfiability Modulo Theories (SMT) solvers, in order to guide the optimization process and to ensure global optimization. This paper marks the first application of CEGIO for planning mobile robot path. In particular, CEGIO has been successfully applied to obtain optimal two-dimensional paths for autonomous mobile robots using off-the-shelf SAT and SMT solvers. * 7 pages, 14rd Latin American Robotics Symposium (LARS'2017) Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data Jacson Rodrigues Correia-Silva, Rodrigo F. Berriel, Claudine Badue, Alberto F. de Souza, Thiago Oliveira-Santos In the past few years, Convolutional Neural Networks (CNNs) have been achieving state-of-the-art performance on a variety of problems. Many companies employ resources and money to generate these models and provide them as an API, therefore it is in their best interest to protect them, i.e., to avoid that someone else copies them. Recent studies revealed that state-of-the-art CNNs are vulnerable to adversarial examples attacks, and this weakness indicates that CNNs do not need to operate in the problem domain (PD). Therefore, we hypothesize that they also do not need to be trained with examples of the PD in order to operate in it. Given these facts, in this paper, we investigate if a target black-box CNN can be copied by persuading it to confess its knowledge through random non-labeled data. The copy is two-fold: i) the target network is queried with random data and its predictions are used to create a fake dataset with the knowledge of the network; and ii) a copycat network is trained with the fake dataset and should be able to achieve similar performance as the target network. 
This hypothesis was evaluated locally in three problems (facial expression, object, and crosswalk classification) and against a cloud-based API. In the copy attacks, images from both non-problem domain and PD were used. All copycat networks achieved at least 93.7% of the performance of the original models with non-problem domain data, and at least 98.6% using additional data from the PD. Additionally, the copycat CNN successfully copied at least 97.3% of the performance of the Microsoft Azure Emotion API. Our results show that it is possible to create a copycat CNN by simply querying a target network as black-box with random non-labeled data. * 8 pages, 3 figures, accepted by IJCNN 2018 Counterexample Guided Inductive Optimization Rodrigo F. Araujo, Higo F. Albuquerque, Iury V. de Bessa, Lucas C. Cordeiro, Joao Edgar C. Filho This paper describes three variants of a counterexample guided inductive optimization (CEGIO) approach based on Satisfiability Modulo Theories (SMT) solvers. In particular, CEGIO relies on iterative executions to constrain a verification procedure, in order to perform inductive generalization, based on counterexamples extracted from SMT solvers. CEGIO is able to successfully optimize a wide range of functions, including non-linear and non-convex optimization problems based on SMT solvers, in which data provided by counterexamples are employed to guide the verification engine, thus reducing the optimization domain. The present algorithms are evaluated using a large set of benchmarks typically employed for evaluating optimization techniques. Experimental results show the efficiency and effectiveness of the proposed algorithms, which find the optimal solution in all evaluated benchmarks, while traditional techniques are usually trapped by local minima. General Fragment Model for Information Artifacts Sandro Rama Fiorini, Wallas Sousa dos Santos, Rodrigo Costa Mesquita, Guilherme Ferreira Lima, Marcio F. Moreno The use of semantic descriptions in data intensive domains require a systematic model for linking semantic descriptions with their manifestations in fragments of heterogeneous information and data objects. Such information heterogeneity requires a fragment model that is general enough to support the specification of anchors from conceptual models to multiple types of information artifacts. While diverse proposals of anchoring models exist in the literature, they are usually focused in audiovisual information. We propose a generalized fragment model that can be instantiated to different kinds of information artifacts. Our objective is to systematize the way in which fragments and anchors can be described in conceptual models, without committing to a specific vocabulary. Hybrid Model For Word Prediction Using Naive Bayes and Latent Information Henrique X. Goulart, Mauro D. L. Tosi, Daniel Soares Gonçalves, Rodrigo F. Maia, Guilherme A. Wachs-Lopes Historically, the Natural Language Processing area has been given too much attention by many researchers. One of the main motivation beyond this interest is related to the word prediction problem, which states that given a set words in a sentence, one can recommend the next word. In literature, this problem is solved by methods based on syntactic or semantic analysis. Solely, each of these analysis cannot achieve practical results for end-user applications. For instance, the Latent Semantic Analysis can handle semantic features of text, but cannot suggest words considering syntactical rules. 
On the other hand, there are models that treat both methods together and achieve state-of-the-art results, e.g. Deep Learning. These models can demand high computational effort, which can make the model infeasible for certain types of applications. With the advance of the technology and mathematical models, it is possible to develop faster systems with more accuracy. This work proposes a hybrid word suggestion model, based on Naive Bayes and Latent Semantic Analysis, considering neighbouring words around unfilled gaps. Results show that this model could achieve 44.2% of accuracy in the MSR Sentence Completion Challenge. Adaptive Modulation and Coding based on Reinforcement Learning for 5G Networks Mateus P. Mota, Daniel C. Araujo, Francisco Hugo Costa Neto, Andre L. F. de Almeida, F. Rodrigo P. Cavalcanti We design a self-exploratory reinforcement learning (RL) framework, based on the Q-learning algorithm, that enables the base station (BS) to choose a suitable modulation and coding scheme (MCS) that maximizes the spectral efficiency while maintaining a low block error rate (BLER). In this framework, the BS chooses the MCS based on the channel quality indicator (CQI) reported by the user equipment (UE). A transmission is made with the chosen MCS and the results of this transmission are converted by the BS into rewards that the BS uses to learn the suitable mapping from CQI to MCS. Comparing with a conventional fixed look-up table and the outer loop link adaptation, the proposed framework achieves superior performance in terms of spectral efficiency and BLER. * Accepted for presentation at the IEEE GLOBECOM 2019 Cross-Domain Car Detection Using Unsupervised Image-to-Image Translation: From Day to Night Vinicius F. Arruda, Thiago M. Paixão, Rodrigo F. Berriel, Alberto F. De Souza, Claudine Badue, Nicu Sebe, Thiago Oliveira-Santos Deep learning techniques have enabled the emergence of state-of-the-art models to address object detection tasks. However, these techniques are data-driven, delegating the accuracy to the training dataset which must resemble the images in the target task. The acquisition of a dataset involves annotating images, an arduous and expensive process, generally requiring time and manual effort. Thus, a challenging scenario arises when the target domain of application has no annotated dataset available, making tasks in such situation to lean on a training dataset of a different domain. Sharing this issue, object detection is a vital task for autonomous vehicles where the large amount of driving scenarios yields several domains of application requiring annotated data for the training process. In this work, a method for training a car detection system with annotated data from a source domain (day images) without requiring the image annotations of the target domain (night images) is presented. For that, a model based on Generative Adversarial Networks (GANs) is explored to enable the generation of an artificial dataset with its respective annotations. The artificial dataset (fake dataset) is created translating images from day-time domain to night-time domain. The fake dataset, which comprises annotated images of only the target domain (night images), is then used to train the car detector model. Experimental results showed that the proposed method achieved significant and consistent improvements, including the increasing by more than 10% of the detection performance when compared to the training with only the available annotated data (i.e., day images). 
* 8 pages, 8 figures, https://github.com/viniciusarruda/cross-domain-car-detection and accepted at IJCNN 2019 Effortless Deep Training for Traffic Sign Detection Using Templates and Arbitrary Natural Images Lucas Tabelini Torres, Thiago M. Paixão, Rodrigo F. Berriel, Alberto F. De Souza, Claudine Badue, Nicu Sebe, Thiago Oliveira-Santos Deep learning has been successfully applied to several problems related to autonomous driving. Often, these solutions rely on large networks that require databases of real image samples of the problem (i.e., real world) for proper training. The acquisition of such real-world data sets is not always possible in the autonomous driving context, and sometimes their annotation is not feasible (e.g., takes too long or is too expensive). Moreover, in many tasks, there is an intrinsic data imbalance that most learning-based methods struggle to cope with. It turns out that traffic sign detection is a problem in which these three issues are seen altogether. In this work, we propose a novel database generation method that requires only (i) arbitrary natural images, i.e., requires no real image from the domain of interest, and (ii) templates of the traffic signs, i.e., templates synthetically created to illustrate the appearance of the category of a traffic sign. The effortlessly generated training database is shown to be effective for the training of a deep detector (such as Faster R-CNN) on German traffic signs, achieving 95.66% of mAP on average. In addition, the proposed method is able to detect traffic signs with an average precision, recall and F1-score of about 94%, 91% and 93%, respectively. The experiments surprisingly show that detectors can be trained with simple data generation methods and without problem domain data for the background, which is in the opposite direction of the common sense for deep learning. Traffic Light Recognition Using Deep Learning and Prior Maps for Autonomous Cars Lucas C. Possatti, Rânik Guidolini, Vinicius B. Cardoso, Rodrigo F. Berriel, Thiago M. Paixão, Claudine Badue, Alberto F. De Souza, Thiago Oliveira-Santos Autonomous terrestrial vehicles must be capable of perceiving traffic lights and recognizing their current states to share the streets with human drivers. Most of the time, human drivers can easily identify the relevant traffic lights. To deal with this issue, a common solution for autonomous cars is to integrate recognition with prior maps. However, additional solution is required for the detection and recognition of the traffic light. Deep learning techniques have showed great performance and power of generalization including traffic related problems. Motivated by the advances in deep learning, some recent works leveraged some state-of-the-art deep detectors to locate (and further recognize) traffic lights from 2D camera images. However, none of them combine the power of the deep learning-based detectors with prior maps to recognize the state of the relevant traffic lights. Based on that, this work proposes to integrate the power of deep learning-based detection with the prior maps used by our car platform IARA (acronym for Intelligent Autonomous Robotic Automobile) to recognize the relevant traffic lights of predefined routes. The process is divided in two phases: an offline phase for map construction and traffic lights annotation; and an online phase for traffic light recognition and identification of the relevant ones. 
The proposed system was evaluated on five test cases (routes) in the city of Vit\'oria, each case being composed of a video sequence and a prior map with the relevant traffic lights for the route. Results showed that the proposed technique is able to correctly identify the relevant traffic light along the trajectory. * Accepted in 2019 International Joint Conference on Neural Networks (IJCNN) On the Shattering Coefficient of Supervised Learning Algorithms Rodrigo Fernandes de Mello The Statistical Learning Theory (SLT) provides the theoretical background to ensure that a supervised algorithm generalizes the mapping $f: \mathcal{X} \to \mathcal{Y}$ given $f$ is selected from its search space bias $\mathcal{F}$. This formal result depends on the Shattering coefficient function $\mathcal{N}(\mathcal{F},2n)$ to upper bound the empirical risk minimization principle, from which one can estimate the necessary training sample size to ensure the probabilistic learning convergence and, most importantly, the characterization of the capacity of $\mathcal{F}$, including its under and overfitting abilities while addressing specific target problems. In this context, we propose a new approach to estimate the maximal number of hyperplanes required to shatter a given sample, i.e., to separate every pair of points from one another, based on the recent contributions by Har-Peled and Jones in the dataset partitioning scenario, and use such foundation to analytically compute the Shattering coefficient function for both binary and multi-class problems. As main contributions, one can use our approach to study the complexity of the search space bias $\mathcal{F}$, estimate training sample sizes, and parametrize the number of hyperplanes a learning algorithm needs to address some supervised task, what is specially appealing to deep neural networks. Experiments were performed to illustrate the advantages of our approach while studying the search space $\mathcal{F}$ on synthetic and one toy datasets and on two widely-used deep learning benchmarks (MNIST and CIFAR-10). In order to permit reproducibility and the use of our approach, our source code is made available at~\url{https://bitbucket.org/rodrigo_mello/shattering-rcode}. An empirical evaluation of imbalanced data strategies from a practitioner's point of view Jacques Wainer, Rodrigo A. Franceschinell This research tested the following well known strategies to deal with binary imbalanced data on 82 different real life data sets (sampled to imbalance rates of 5%, 3%, 1%, and 0.1%): class weight, SMOTE, Underbagging, and a baseline (just the base classifier). As base classifiers we used SVM with RBF kernel, random forests, and gradient boosting machines and we measured the quality of the resulting classifier using 6 different metrics (Area under the curve, Accuracy, F-measure, G-mean, Matthew's correlation coefficient and Balanced accuracy). The best strategy strongly depends on the metric used to measure the quality of the classifier. For AUC and accuracy class weight and the baseline perform better; for F-measure and MCC, SMOTE performs better; and for G-mean and balanced accuracy, underbagging. Computing the Shattering Coefficient of Supervised Learning Algorithms Rodrigo Fernandes de Mello, Moacir Antonelli Ponti, Carlos Henrique Grossi Ferreira The Statistical Learning Theory (SLT) provides the theoretical guarantees for supervised machine learning based on the Empirical Risk Minimization Principle (ERMP). 
Such principle defines an upper bound to ensure the uniform convergence of the empirical risk Remp(f), i.e., the error measured on a given data sample, to the expected value of risk R(f) (a.k.a. actual risk), which depends on the Joint Probability Distribution P(X x Y) mapping input examples x in X to class labels y in Y. The uniform convergence is only ensured when the Shattering coefficient N(F,2n) has a polynomial growing behavior. This paper proves the Shattering coefficient for any Hilbert space H containing the input space X and discusses its effects in terms of learning guarantees for supervised machine algorithms. Learning Representations and Agents for Information Retrieval Rodrigo Nogueira A goal shared by artificial intelligence and information retrieval is to create an oracle, that is, a machine that can answer our questions, no matter how difficult they are. A more limited, but still instrumental, version of this oracle is a question-answering system, in which an open-ended question is given to the machine, and an answer is produced based on the knowledge it has access to. Such systems already exist and are increasingly capable of answering complicated questions. This progress can be partially attributed to the recent success of machine learning and to the efficient methods for storing and retrieving information, most notably through web search engines. One can imagine that this general-purpose question-answering system can be built as a billion-parameters neural network trained end-to-end with a large number of pairs of questions and answers. We argue, however, that although this approach has been very successful for tasks such as machine translation, storing the world's knowledge as parameters of a learning machine can be very hard. A more efficient way is to train an artificial agent on how to use an external retrieval system to collect relevant information. This agent can leverage the effort that has been put into designing and running efficient storage and retrieval systems by learning how to best utilize them to accomplish a task. ... Expressive mechanisms for equitable rent division on a budget Rodrigo A. Velez We design envy-free mechanisms for the allocation of rooms and rent payments among roommates. We achieve four objectives: (1) each agent is allowed to make a report that expresses her preference about violating her budget constraint, a feature not achieved by mechanisms that only elicit quasi-linear reports; (2) these reports are finite dimensional; (3) computation is feasible in polynomial time; and (4) incentive properties of envy-free mechanisms that elicit quasi-linear reports are preserved. A framework for fake review detection in online consumer electronics retailers Rodrigo Barbado, Oscar Araque, Carlos A. Iglesias The impact of online reviews on businesses has grown significantly during last years, being crucial to determine business success in a wide array of sectors, ranging from restaurants, hotels to e-commerce. Unfortunately, some users use unethical means to improve their online reputation by writing fake reviews of their businesses or competitors. Previous research has addressed fake review detection in a number of domains, such as product or business reviews in restaurants and hotels. However, in spite of its economical interest, the domain of consumer electronics businesses has not yet been thoroughly studied. This article proposes a feature framework for detecting fake reviews that has been evaluated in the consumer electronics domain. 
The contributions are fourfold: (i) construction of a dataset for classifying fake reviews in the consumer electronics domain in four different cities based on scraping techniques; (ii) definition of a feature framework for fake review detection; (iii) development of a fake review classification method based on the proposed framework; and (iv) evaluation and analysis of the results for each of the cities under study. We have reached an 82% F-Score on the classification task and the AdaBoost classifier has been proven to be the best one by statistical means according to the Friedman test.
* Information Processing & Management, ISSN 0306-4573, Volume 56, Issue 4, July 2019
* Information Processing & Management, 11 pages
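As a rough illustration of the kind of pipeline this last abstract implies (not the authors' implementation, whose feature framework goes well beyond plain text features), the sketch below scores an AdaBoost classifier on TF-IDF text features with a cross-validated F-score. The function name, the variable names, and the use of TF-IDF alone are assumptions made for illustration.

from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def fake_review_baseline_f1(texts, labels):
    """Cross-validated F1 of a TF-IDF + AdaBoost baseline.
    `texts` is a list of review strings, `labels` is 1 for fake and 0 for genuine."""
    pipeline = make_pipeline(
        TfidfVectorizer(min_df=2, ngram_range=(1, 2)),   # simple lexical features only
        AdaBoostClassifier(n_estimators=200, random_state=0),
    )
    scores = cross_val_score(pipeline, texts, labels, scoring="f1", cv=5)
    return scores.mean()

Richer feature sets (reviewer behaviour, metadata, city-level splits) would be needed to approach the results reported in the paper; this baseline only shows how the evaluation could be wired together.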
CommonCrawl
Hypotheses, Theories, and Laws Assignment
Terms in this set (8)

Use the drop-down menus to complete the statements. A ___ is a factual statement about how something will behave. A ___ is a well-tested explanation of a set of observations. A ___ is a possible explanation of a scientific question that is testable.
1. law
2. theory
3. hypothesis

For which of these questions could a testable hypothesis be developed? Check all that apply.
Does the width of a rubber band affect how far it will stretch?
How does the thickness of a material affect insulation?
Do all objects fall to the ground at the same speed?

Which statements describe a good hypothesis? Check all that apply.
A good hypothesis can be tested.
A good hypothesis leads to a test with measurable results.
A good hypothesis provides a possible explanation to answer a scientific question.

Identify the variables in this hypothesis. If an object's mass is increased, then the object's acceleration will decrease because the object will be more resistant to change in motion. The independent variable is ___. The dependent variable is ___.
1. mass
2. acceleration

Yosef is playing with different kinds of rubber bands. Some are very narrow and some are quite wide. Yosef is curious about the rubber bands, and develops this scientific question: Does the width of a rubber band affect how easily it can be stretched? He decides to develop a hypothesis to test this scientific question. What could Yosef's hypothesis be?
If the width of a rubber band is increased, then it will be more difficult to stretch because more force will be needed to stretch it.

Explain why and how theories may be changed or replaced over time.
Theories may be changed over time as new information is discovered or new technologies are developed. New developments lead to changes in experimental methods, which provide information that may or may not support the existing theory. If the theory is still supported, it may be updated. If it is not supported but the results are true and relevant, the theory may be replaced.

What happens when new information from an experiment produces results that partially support a theory?
The theory is revised or updated.

Explain why a law is accepted as fact, but a theory is not.
Theories are not accepted as fact because new information or technology can show that the theory is incomplete or incorrect. A law is accepted as fact because it is a statement of what will happen and no exceptions have ever been found.
Textbook practice problems:

1. Show that if X has a countable basis, a collection $\mathcal{A}$ of subsets of X is countably locally finite if and only if it is countable.

2. A small aerospace company is considering eight projects:
Project 1: Develop an automated test facility.
Project 2: Barcode all company inventory and machinery.
Project 3: Introduce a CAD/CAM system.
Project 4: Buy a new lathe and deburring system.
Project 5: Institute FMS (flexible manufacturing system).
Project 6: Install a LAN (local area network).
Project 7: Develop AIS (artificial intelligence simulation).
Project 8: Set up a TQM (total quality management) initiative.
Each project has been rated on five attributes: return on investment (ROI), cost, productivity improvement, worker requirements, and degree of technological risk. These ratings are given in Table 67. The company has set the following five goals (listed in order of priority):
Goal 1: Achieve a return on investment of at least $3,250.
Goal 2: Limit cost to $1,300.
Goal 3: Achieve a productivity improvement of at least 6.
Goal 4: Limit manpower use to 108.
Goal 5: Limit technological risk to a total of 4.
Use preemptive goal programming to determine which projects should be undertaken.
TABLE 67:
$$ \begin{matrix} \text{ } & \text{Project}\\ \text{ } & \text{1} & \text{2} & \text{3} & \text{4} & \text{5} & \text{6} & \text{7} & \text{8}\\ \text{ROI (\$)} & \text{2,070} & \text{456} & \text{670} & \text{350} & \text{495} & \text{380} & \text{1,500} & \text{480}\\ \text{Cost (\$)} & \text{900} & \text{240} & \text{335} & \text{700} & \text{410} & \text{190} & \text{500} & \text{160}\\ \text{Productivity improvement} & \text{3} & \text{2} & \text{2} & \text{0} & \text{1} & \text{0} & \text{3} & \text{2}\\ \text{Manpower needed} & \text{18} & \text{18} & \text{27} & \text{36} & \text{42} & \text{6} & \text{48} & \text{24}\\ \text{Degree of risk} & \text{3} & \text{2} & \text{4} & \text{1} & \text{1} & \text{0} & \text{2} & \text{3}\\ \end{matrix} $$

3. Describe the region of the unit sphere covered by the image of the Gauss map of the following surfaces: a. Paraboloid of revolution $z=x^{2}+y^{2}.$ b. Hyperboloid of revolution $x^{2}+y^{2}-z^{2}=1.$ c. Catenoid $x^{2}+y^{2}=\cosh ^{2} z.$

4. Use Euler's theorem to establish the following: (a) For any integer $a, a^{37} \equiv a \ (\bmod\ 1729).$ [Hint: $1729=7 \cdot 13 \cdot 19$.] (b) For any integer $a, a^{13} \equiv a \ (\bmod\ 2730).$ [Hint: $2730=2 \cdot 3 \cdot 5 \cdot 7 \cdot 13$.] (c) For any odd integer $a, a^{33} \equiv a \ (\bmod\ 4080).$ [Hint: $4080=15 \cdot 16 \cdot 17$.]
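A possible worked sketch for part (a) of the last problem, arguing prime by prime with Fermat's little theorem (one standard route, not the only one):

$$ 1729 = 7 \cdot 13 \cdot 19, \qquad 36 = 6 \cdot 6 = 12 \cdot 3 = 18 \cdot 2. $$

For each prime $p \in \{7, 13, 19\}$: if $p \nmid a$, Fermat's little theorem gives $a^{p-1} \equiv 1 \pmod{p}$, so $a^{37} = a \cdot \left(a^{p-1}\right)^{36/(p-1)} \equiv a \pmod{p}$; if $p \mid a$, both sides are $\equiv 0 \pmod{p}$. Hence $a^{37} \equiv a$ modulo 7, 13 and 19, and since these moduli are pairwise coprime, $a^{37} \equiv a \pmod{1729}$ for every integer $a$.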
CommonCrawl
Determinants of subnational disparities in antenatal care utilisation: a spatial analysis of demographic and health survey data in Kenya

Kefa G. Wairoto (ORCID: 0000-0001-8362-3630)1, Noel K. Joseph (ORCID: 0000-0002-0509-1373)1, Peter M. Macharia (ORCID: 0000-0003-3410-1881)1 & Emelda A. Okiro (ORCID: 0000-0001-9543-8360)1,2
BMC Health Services Research volume 20, Article number: 665 (2020)

Abstract

Background: The spatial variation in antenatal care (ANC) utilisation is likely associated with disparities observed in maternal and neonatal deaths. Most maternal deaths are preventable through services offered during ANC; however, estimates of ANC coverage at lower decision-making units (sub-county) are mostly lacking. In this study, we aimed to estimate the coverage of at least four ANC (ANC4) visits at the sub-county level using the 2014 Kenya Demographic and Health Survey (KDHS 2014) and identify factors associated with ANC utilisation in Kenya.

Methods: Data from the KDHS 2014 were used to compute sub-county estimates of ANC4 using small area estimation (SAE) techniques, which relied on spatial relatedness to yield precise and reliable estimates at each of the 295 sub-counties. Hierarchical mixed-effect logistic regression was used to identify factors influencing ANC4 utilisation. Sub-county estimates of factors significantly associated with ANC utilisation were produced using SAE techniques and mapped to visualise disparities.

Results: The coverage of ANC4 across sub-counties was heterogeneous, ranging from a low of 17% in Mandera West sub-county to over 77% in Nakuru Town West and Ruiru sub-counties. Thirty-one per cent of the 295 sub-counties had coverage of less than 50%. Maternal education, household wealth, place of delivery, marital status, age at first marriage, and birth order were all associated with ANC utilisation. The areas with low ANC4 utilisation rates corresponded to areas of low socioeconomic status, fewer educated women and a small number of health facility deliveries.

Conclusions: Suboptimal coverage of ANC4 and its heterogeneity at the sub-county level call for urgent, focused and localised approaches to improve access to antenatal care services. Policy formulation and resource allocation should rely on data-driven strategies to guide national and county governments in achieving equity in access and utilisation of health interventions.

Background

Approximately 0.3 million maternal deaths and 2.6 million stillbirths occurred globally in 2015, with sub-Saharan Africa (SSA) accounting for most of these deaths at 66% and 40%, respectively [1, 2]. Between 30 and 50% of maternal mortality is due to inadequate care during pregnancy, while two-thirds of stillbirths are antepartum, caused by maternal infections and pregnancy complications [3]. These deaths are preventable through services offered during antenatal care (ANC) [3, 4]. ANC visits are aimed at improving triage and timely referral of high-risk women, include educational components, and should ideally avert most health complications that may affect the mother or the newborn [4]. Until 2016, the World Health Organization (WHO) recommended at least four ANC visits; this was later revised to eight visits in line with new evidence that more frequent maternal and fetal assessment improves safety during pregnancy and is associated with a reduced likelihood of perinatal deaths [5, 6].
Countries have routinely monitored the coverage of ANC utilisation and its predictors at national and regional levels through household sample surveys [7]. Typically, two ANC coverage indicators are monitored: ANC1, defined as the proportion of women aged 15–49 years who received ANC services provided by a skilled birth attendant (doctor, nurse or midwife) at least once during pregnancy, and ANC4 for those who attended four or more visits [8]. In SSA, only 80% of pregnant women accessed ANC1, and only 52% received ANC4 in 2018 [8]. The timing (initiation of the first ANC visit) is also monitored and plays a crucial role in determining the completion of the recommended visits.

Tracking coverage at the global, regional or country level is essential for macro-level comparisons. However, analysis at this level obscures significant variations within a country, popularly known as "masking the unfinished health agenda" [9]. The Sustainable Development Goals (SDGs) enshrine health equity based on their fundamental principle of leaving no one behind, with a focus on reaching those who are most marginalised first [10, 11]. The lack of data powered to provide precise and reliable estimates at units of decision-making hinders the description of subnational heterogeneities [12]. Recent advancements in mapping and statistical techniques have allowed mapping of child survival and its determinants at a fine spatial resolution [13,14,15,16]. However, the variation of ANC utilisation and its predictors at lower geographical units of decision making remains imperfectly described in Kenya to facilitate policy formulation and targeted interventions [17]. In the current study, we leverage small area estimation (SAE) techniques to map ANC4 utilisation at the sub-county level and identify factors affecting ANC utilisation using data from the Kenya Demographic and Health Survey conducted in 2014 (KDHS 2014).

Country context

The Millennium Development Goals (MDGs) era saw Kenya make substantial gains in maternal and newborn health. Following an increase in maternal mortality in the 1990s, the trend was reversed with a 39% reduction in maternal mortality rates (MMR) from 590 per 100,000 live births in 1998 to 362 in 2014 [18]. The units of administration and health planning were revised to 47 counties in 2013 when Kenya adopted a decentralised system of governance (Fig. 1 and Additional file 1) [19, 20], and these are further divided into 295 sub-counties (Fig. 1 and Additional file 1). Kenya's health sector is pluralistic, with governmental, non-governmental and privately managed health facilities. The structure of service delivery is hierarchical with six tiers, namely the community level followed by dispensaries, health centres, primary referral, secondary referral, and tertiary facilities.

Fig. 1. The map of Kenya showing 47 counties (colored) and 295 sub-counties (numbered). The extents of major lakes and the Indian Ocean are shown in light blue. The names of the counties and sub-counties corresponding to the displayed numbers are presented in Additional file 1. Source: author generated map

There are over 11,000 health facilities in Kenya, with about 6000 public health facilities managed by either the ministry of health, local authorities, faith-based organisations or non-governmental organisations capable of offering general health services to the public [21,22,23,24]. ANC services are available through these health facilities.
Since independence, the government of Kenya has made substantial progress in making healthcare services affordable and accessible to women and children by putting in place different policies affecting access and utilisation [25,26,27,28,29,30,31,32,33,34]. Since 2013, all services at government outpatient facilities and maternity services have been offered free of charge [35, 36].

ANC utilisation, socioeconomic, and demographic data on pregnant women were obtained from the KDHS 2014. The survey employed a two-stage sampling design on a national sampling frame of 5360 clusters. One thousand six hundred and twelve (1612) clusters were selected with equal probability, 995 in urban and 617 in rural areas. In the second stage, 40,300 households were selected. Additional data on high-resolution travel time to the nearest health facility were obtained from a study by Alegana et al., 2018 [37]. In brief, travel time to the nearest public health facility was computed based on a cost distance algorithm while factoring in different modes of transport and travelling speeds [37]. The method calculates the cumulative travel time associated with travelling from a cluster to the nearest health facility along the shortest possible route. Each DHS cluster was assigned a travel time based on its average time over a 2 km (urban) or 5 km (rural) buffer to minimise the effect of the random displacement of DHS survey clusters [38,39,40].

Based on a review of the literature assessing the association between ANC use and its determinants [38, 41,42,43,44,45,46], candidate variables were abstracted from the KDHS 2014. They included maternal education, birth order, household wealth, household residence type, marital status, ethnicity, parity, age at first marriage/cohabitation, place of delivery, sex of household head, religion, maternal age and time to the nearest health facility [37].

Factors associated with ANC4 utilisation

Univariate regression models were used to assess the crude association between each of the determinants and ANC4 utilisation. Variables were included in the multivariate modelling stage if the p-value was less than 0.20. Multi-collinearity among predictors was assessed using variance inflation factors (VIF), whereby VIF > 3 indicated highly collinear variables [18]. A hierarchical mixed-effect logistic regression model was used due to the nesting structure and multistage sampling design of the KDHS 2014 [47]. County was included as a random effect to account for region-specific contextual factors (e.g. health financing). The Bayesian Information Criterion (BIC) was used to assess the fit of the models using forward variable selection. The models were implemented using the "lme4" package [48] in R software (version 3.5.2) and Stata (StataCorp. 2014. Stata Statistical Software: Release 14. College Station, TX: StataCorp LP).

Modelling sub-county coverage using small area estimation

Additional file 2 summarises the analytical processes used to estimate the coverage of ANC4 and its determinants at the sub-county level, using SAE techniques to smooth both the coverage of ANC4 and significant determinants of ANC utilisation. Individual data on ANC utilisation from the KDHS 2014 were collapsed to either 0 (< 4 ANC visits) or 1 (≥ 4 ANC visits). Using Global Positioning System (GPS) cluster coordinates, the individual data were assigned to the respective sub-counties through a spatial join in ArcMap 10.5 (ESRI Inc., Redlands, CA, USA). The weighted number of women who had at least four ANC visits was then computed in Stata (StataCorp. 2014. Stata Statistical Software: Release 14. College Station, TX: StataCorp LP) at the sub-county level, adjusting for the survey sampling design and applying survey weights.
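A minimal sketch of the kind of weighted sub-county aggregation just described (point estimates only; the study used Stata's survey routines, which additionally account for the stratified, clustered design when computing standard errors). The column names below, such as 'subcounty', 'anc4' and 'wt', are assumptions for illustration rather than the actual KDHS variable names.

import pandas as pd

def weighted_anc4_by_subcounty(df: pd.DataFrame) -> pd.DataFrame:
    """Weighted number of women with >= 4 ANC visits and weighted coverage per sub-county.
    'anc4' is 1 if the woman attended at least four visits, else 0; 'wt' is the sampling weight."""
    grouped = df.groupby("subcounty")
    weighted_events = grouped.apply(lambda g: (g["anc4"] * g["wt"]).sum())
    weighted_total = grouped["wt"].sum()
    out = pd.DataFrame({
        "weighted_anc4": weighted_events,
        "weighted_women": weighted_total,
    })
    out["coverage"] = out["weighted_anc4"] / out["weighted_women"]
    return out.reset_index()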
A binomial formulation with a logit link function was implemented with a spatially structured random effect (μi) to account for unmeasured spatial risk factors for ANC use and an unstructured random effect (νi) to account for area-specific characteristics (Eq. 1).

Spatial smoothing of ANC4 and covariates

$$ \mathrm{Log}\left\{\frac{p(i)}{1-p(i)}\right\}=\alpha +{\mu}_i+{\nu}_i $$ (1)

The spatial dependence (μi) was represented through a neighbourhood matrix that defined a set of adjacent neighbours for each sub-county (i) and was modelled through a conditional autoregressive (CAR) process. In this formulation, the parameters in one sub-county were influenced by the average of the neighbouring sub-counties. The Besag-York-Mollié 2 (BYM2) CAR model [49] was used as it better accounts for identifiability and scaling. Other formulations [50, 51] did not perform any better when tested during model formulation and evaluation. Two sub-counties were defined as neighbours if they shared either a boundary or a node (queen adjacency), because each sub-county had at least one identified neighbour under this definition, as opposed to distance-based and rook adjacency (neighbours based on a shared boundary only).

Covariates were not used to assist in modelling ANC4 coverage at the sub-county level, to avoid the likelihood of creating a covariate-driven metric as opposed to data-driven utilisation rates, despite their ability to lower the standard errors [52]. The observed ANC4 utilisation rates were regarded as the result of all possible socioeconomic, demographic and environmental factors that influence ANC trends. Besides, a census of all covariates that would influence ANC4 is neither available nor error-free (unbiased). Thus, the SAE models relied fully on the ANC4 empirical data for the generation of coverage maps. Similar model formulations have been applied elsewhere [12, 14].

The areal level models were run in R software (version 3.5.2) using the R-INLA package. The posterior estimates of ANC4 coverage were then mapped at the sub-county level in ArcGIS 10.5 (ESRI Inc., Redlands, CA, USA). Model predictive performance was assessed through cross-validation using a 10% randomly selected hold-out sample, and the correlation, root-mean-square error and bias were computed. The interpretation of these statistics is relative, with lower values of root-mean-square error indicating a better fit; a higher correlation suggests an association between the observed and predicted model values and hence is preferred [53]. The coverage of the significant variables at the sub-county level was estimated and mapped using the same framework.

This study used secondary data only, which is publicly available to registered users from online data repositories. The procedures and questionnaires for DHS surveys have been reviewed and approved by the ICF International Institutional Review Board (IRB). The ICF International IRB ensures that the survey complies with the U.S. Department of Health and Human Services regulations for the protection of human subjects (45 CFR 46).
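The BYM2 model itself was fitted with R-INLA, but the much-simplified Python sketch below illustrates the two ingredients of the areal-level smoothing described above: a queen-contiguity neighbourhood and shrinkage of each sub-county's crude coverage towards the mean of its neighbours, with more shrinkage where the local sample is small. It is an illustration of the idea only, not the model used in the paper; the GeoDataFrame column names ('anc4_w', 'n_w') and the prior_strength constant are assumptions.

import numpy as np
import geopandas as gpd

def queen_neighbours(gdf: gpd.GeoDataFrame) -> list:
    """Indices of queen-contiguity neighbours (polygons sharing an edge or a single node)."""
    geoms = gdf.geometry.reset_index(drop=True)
    return [list(np.flatnonzero(geoms.touches(geom).to_numpy())) for geom in geoms]

def neighbour_smoothed_coverage(gdf: gpd.GeoDataFrame,
                                events: str = "anc4_w",
                                totals: str = "n_w",
                                prior_strength: float = 20.0) -> np.ndarray:
    """Shrink each area's crude coverage towards its neighbours' average;
    a crude stand-in for the CAR-type spatial smoothing described in the text."""
    crude = gdf[events].to_numpy(dtype=float) / gdf[totals].to_numpy(dtype=float)
    smoothed = np.empty_like(crude)
    for i, nbrs in enumerate(queen_neighbours(gdf)):
        local_n = float(gdf[totals].iloc[i])
        reference = crude[nbrs].mean() if nbrs else crude.mean()
        weight = prior_strength / (prior_strength + local_n)   # more shrinkage when n is small
        smoothed[i] = weight * reference + (1.0 - weight) * crude[i]
    return smoothed

Unlike this heuristic, the BYM2 formulation estimates the structured and unstructured components jointly and yields full posterior uncertainty for each sub-county.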
This study used secondary data only, which are publicly available to registered users from online data repositories. The procedures and questionnaires for DHS surveys have been reviewed and approved by the ICF International Institutional Review Board (IRB). The ICF International IRB ensures that the survey complies with the U.S. Department of Health and Human Services regulations for the protection of human subjects (45 CFR 46).

Participants' characteristics

A total of 14,858 women aged between 15 and 49 years had at least one pregnancy in the five years preceding the KDHS 2014 survey and were therefore expected to attend the recommended number of ANC visits during pregnancy. Weighted estimates show that 96% of women had at least one ANC visit and 58% had at least four ANC visits. More than 90% of the women in our sample had at least primary school education (90.4%), and approximately 3 in 5 women (60.4%) came from a household of higher socioeconomic status based on the household wealth index (Table 1). Most women (61.4%) were residing in rural areas in 2014, with a majority being married (81.5%). Almost two-thirds (66.1%) of the deliveries occurred at a health facility, with most births occurring at a public health facility (49%). Thirty-five percent of the women were married before their 18th birthday, while 9.6% had at least seven children. Table 1 provides a summary of the characteristics of the study participants.

Table 1 Socioeconomic and demographic characteristics of women aged 15–49 who had a live birth in the five years preceding the 2014 Kenya Demographic and Health Survey (n = 14,858) and the factors associated with antenatal care utilisation for at least four visits from a bivariate model in Kenya

Subnational coverage of at least four ANC visits

National modelled estimates show that 95.6% [95% CI: 95.2–95.9] of the pregnant women had attended at least one ANC visit in 2014 and three in five had attended at least four ANC visits (57.7% [95% CI: 56.9–58.5]). ANC coverage estimates were computed for all 295 sub-counties. The spatial model had a root-mean-square error of 13.0, a mean absolute error of 7.2 and a correlation coefficient of 0.813 between the observed and smoothed values. The estimates reveal significant cross-country heterogeneities at the sub-county level (Fig. 2), ranging from 16.9% [95% CI: 9.7–26.8] in Mandera West sub-county to 77.7% [95% CI: 62.5–88.6] in Ruiru sub-county (Fig. 2; Additional file 3).

Fig. 2 Map showing the coverage of at least 4 ANC visits at sub-county level based on the 2014 Kenya Demographic and Health Survey. The coverage is classified in four classes ranging from < 35% (red), 35–< 50% (brown), 50–65% (light green) to > 65% (dark green). Source: author generated map

Sixty-nine percent of the sub-counties (204/295) had a mean coverage of at least 50% of women attending at least four antenatal care visits. These sub-counties were mostly in central Kenya, the south-eastern part of the country along the Indian Ocean and parts of western Kenya along Lake Victoria. Twenty sub-counties had ANC4 attendance of over 70%. They included Kibra, Makadara, Mathare, Roysambu, Dagoretti North (Nairobi county), Nakuru Town West, Naivasha (Nakuru county), Mathioya, Kiharu (Murang'a county), Changamwe (Mombasa county), Kajiado East, Kajiado North (Kajiado county), Kikuyu, Ruiru (Kiambu county), Muhoroni (Kisumu county), Matungulu (Machakos county), Kibwezi West, Makueni (Makueni county), Msambweni (Kwale county) and Rabai (Kilifi county) (Fig. 2). Among these 20 sub-counties, only four (Ruiru, Rabai, Makadara and Nakuru Town West) had coverage of over 75% (Fig. 2 and Additional file 3). Geographically, sub-counties in the northern and north-eastern regions had the lowest utilisation of ANC4.
A total of 18 sub-counties (6.1%) had a coverage of less than 35%, namely Tiaty (Baringo county), Chepalungu (Bomet county), Marakwet East (Elgeyo Marakwet county), Ijara (Garissa county), Malava (Kakamega county), Banissa, Lafey, Mandera North, Mandera South, Mandera West (Mandera county), North Horr (Marsabit county), Saboti (Trans Nzoia county), Wajir North, Wajir South (Wajir county), and North Pokot, Pokot Central, Pokot South and West Pokot, all in West Pokot county (Fig. 2 and Additional file 3).

Determinants of ANC4 utilisation and their subnational variation

Table 1 shows the results of the bivariate logistic regression analysis based on 13 candidate variables. Based on the p-value, all factors except the sex of the household head had a significant bivariate relationship with ANC coverage and were included in the multivariate analysis (Table 2). The parsimonious model based on BIC had six variables: age at first marriage, place of delivery, maternal education, birth order, marital status and household wealth (Table 2).

Table 2 Hierarchical mixed-effects logistic regression model odds ratios of at least four ANC visits among women of reproductive age (15–49 years) who had at least one live birth in the 5 years preceding the 2014 Kenya Demographic and Health Survey

The probability of ANC4 utilisation increased across wealth quintiles; the odds of ANC4 utilisation were two times higher in the least poor (wealthiest) quintile compared with the poorest wealth quintile [OR = 2.05; 95% CI 1.60–2.65; P < 0.0001]. Not delivering in a health facility was associated with lower odds of ANC4 utilisation [OR = 0.54; 95% CI 0.48–0.61; P < 0.0001]. Lower levels of maternal education were associated with lower rates of ANC4 utilisation (Table 2). Women who got married after 18 years were more likely to utilise ANC4, but this effect was not significant [OR = 1.07; 95% CI 0.97–1.18; P = 0.199], while women who were married were more likely to utilise ANC4. Finally, women with children of a higher birth order (fifth or higher) were less likely to utilise ANC4 [OR = 0.87; 95% CI 0.78–0.99; P = 0.027] (Table 2). Figure 3 shows the geographic variation of the six determinants associated with ANC4 use in the parsimonious model in Kenya by sub-county. The spatial variation in maternal education mirrored that of ANC4 attendance: sub-counties in central and western Kenya had higher proportions of mothers with at least secondary school education and higher coverage of ANC4 visits. Women with tertiary education were three times more likely to utilise ANC4 compared with those without any education (Fig. 3 and Table 2).

Fig. 3 Map showing the coverage of determinants associated with the utilisation of at least 4 ANC visits at sub-county level based on the 2014 Kenya Demographic and Health Survey, from the parsimonious model. The dark lines represent the counties. Source: author generated map

Across sub-counties, lower coverage of health facility deliveries (less than 25%) was more common in the northern, eastern and south-eastern areas of Kenya. In the same regions, ANC4 utilisation rates were less than 50%. Similar relationships were observed for the other determinants (age at first marriage, birth order and household wealth) except for marital status. For example, across sub-counties where socioeconomic status was low (> 75% of the households in the poor and poorest wealth quintiles), ANC utilisation rates were low (< 50%) (Fig. 3).
Improving ANC coverage across all countries is a collective priority for the global health community. Maternal mortality remains an unconscionable burden; ensuring that maternal services reach all women equitably, including those in the poorest and most disadvantaged communities, therefore remains a critical goal. In Kenya there has been considerable progress towards improving ANC coverage, yet significant differences persist between sub-regions, coinciding with variations in sociodemographic factors. ANC4 utilisation rates are heterogeneous, with sub-counties in northern and eastern Kenya markedly marginalised compared with those around central Kenya. For example, pregnant women in central Kenya were almost five times more likely to attend the recommended four ANC visits than those in northern and eastern Kenya. Regions that were disadvantaged with respect to access to ANC services bore several other disadvantages; geography is therefore a critical determinant of health inequities. These areas have a higher proportion of households classified as poor, in addition to a higher percentage of uneducated women compared with the rest of the country. Higher education levels are associated with greater use of health services, financial advantages, and greater autonomy [54,55,56,57,58]. Finally, these areas also had the lowest numbers of health facility deliveries, which are highly correlated with ANC4 coverage, likely reflecting poor road infrastructure and correspondingly poorer metrics of geographic access to health care [22,23,24, 59,60,61]. ANC4 utilisation was significantly associated with socioeconomic status: women from households with high socioeconomic status were more likely to utilise a minimum of four ANC services. Socioeconomic status is strongly correlated with education, and educated mothers are more aware of their own health and the development of their families and have greater autonomy in deciding to use health services [62]. Women from higher socioeconomic groupings are also more likely to be able to afford to seek care, making wealth a strong predictor of higher ANC4 utilisation even in a context like Kenya where maternal services are free or highly subsidised [30,31,32,33,34]. The government of Kenya and other stakeholders have over the years introduced programmes to improve uptake of maternal health services and reduce disparities and inequities across Kenya. In June 2013, the government abolished fees payable by mothers seeking care in public health facilities, which increased health facility deliveries from 44% in 2012/13 to 62% in 2014 [34]. Under this programme (Linda Mama), a pregnant woman is entitled to ANC, delivery, post-natal care (PNC), emergency referrals and care for infants up to one year [63]. Before the implementation of this programme, the government ran a reproductive health voucher programme between 2006 and 2016 [30,31,32]. The vouchers were sold at a highly subsidised price, catered for ANC, facility delivery and PNC, and were specifically targeted at poor women; they were associated with an increase in facility deliveries [31]. However, these subsidies did not appear to increase ANC coverage [31]. They resulted in a modest increase in facility delivery and greater use of the private sector for all services, further highlighting the need for interventions that better address the factors underlying low ANC utilisation. The odds of having at least four ANC visits during pregnancy were significantly lower among women who were not married.
Studies have shown that both economic status and dynamics regarding the distribution of power within households influence the use of maternal health services [64]. High birth order was also associated with a lower likelihood of utilising ANC4. A combination of factors is likely at play here: one is the lack of time given other childcare responsibilities [65]; another is the belief among these mothers that their prior experience of pregnancy gives them sufficient knowledge of the associated risks [66]. Findings such as these can guide local community-based initiatives aimed at increasing the utilisation of ANC services. The Beyond Zero initiative launched in 2014 was aimed at complementing government programmes to reduce maternal, newborn and child deaths. It focuses on promoting access to quality maternal and neonatal healthcare services and on having certified centres of excellence for maternal and child health care within each county, among other priorities [67]. In addition, the government has put in place initiatives to improve maternal and overall health by introducing the last mile project, which focuses on the establishment of health facilities to reduce travel time and encourage the utilisation of interventions. Women are acutely affected by the physical and time barriers to accessing health services; however, in this study, travel time was only significant in the univariate model and its inclusion in the multivariate analysis did not improve the model fit. Kenya has a substantially high number of health facilities [21,22,23,24]: over 98% of women who had at least one pregnancy in the five years preceding the KDHS 2014 survey lived within 30 min of the nearest health facility. Initiatives that involve Community Health Workers (CHWs) are pivotal in improving access to care and addressing human resource challenges [68]. There is adequate evidence that CHWs have robustly improved health outcomes [68], hence the renewed attention to the need to strengthen CHW performance. Such initiatives need subnational data to inform better targeting at levels below the county. Specifically, in marginalised sub-counties, where populations can be highly mobile, alternative, complementary approaches to existing mechanisms should be explored. Identifying sub-counties where ANC utilisation rates remain low, and the factors associated with the observed patterns, will allow county governments to direct suitable interventions and actions [12] to promote ANC attendance. The realisation of targets to reduce maternal mortality requires robust progress monitoring to underpin plans for improvement in health services and to identify disadvantaged groups, prioritising those with the greatest need. Most government policy directives tend to be broad and frequently focus on a subset of local governing units, often failing to identify strategies that can overcome the social barriers faced by disadvantaged communities. Insights from existing data should inform the policy formulation process and the allocation of resources to address disparities in ANC intervention uptake. To further improve attitudes and perceptions towards ANC, preventive and promotional health education campaigns need to be carried out to enhance maternal health service utilisation. Challenges involving the adequacy of infrastructure, human resource availability and other aspects of health service provision, such as quality of care, should be addressed to improve use.
In addition, local governments need to utilise opportunities to leverage other non-health, pro-equity interventions to increase coverage. Despite the strengths of the study, there are several caveats attached to this analysis. The survey included the experiences of mothers with a live birth in the five years preceding the survey, leaving out mothers with other birth outcomes or those who might have died during pregnancy or delivery, resulting in selection bias. Due to the retrospective nature of the collected data, there is a risk of recall bias, which might lead to inaccurate results [69]. The study was limited to the socioeconomic and demographic factors collected during the household surveys, leaving out factors such as the availability and cost of care and of skilled health workers. The displacement of cluster coordinates for confidentiality was not taken into account; thus, a small proportion of clusters near boundary edges may have been misclassified. However, the use of SAE models to smooth the estimates across adjacent units would potentially abate this effect. Household sample surveys provide an opportunity to monitor the coverage and trends of most health indicators at the community level. However, these surveys are conducted every three to five years, limiting the tracking of trends at a higher temporal granularity. An alternative source of information is the Kenya Health Management Information System (HMIS), based on the District Health Information System version 2 (DHIS2), which also offers information to monitor ANC trends. DHIS2 has been used to track trends and compare them against those reported in household sample surveys, with promising results [70,71,72,73]. However, its use is limited by poor reporting rates [22] and challenges in determining accurate catchment populations (the population in need of a service) [74]. In conclusion, ANC4 utilisation rates remain suboptimal and show substantial subnational variability. The areas with low ANC4 utilisation rates corresponded to areas of low socioeconomic status, lower levels of female education and lower numbers of health facility deliveries. Improvements in maternal health cannot be realised without fundamental changes in education, household wealth status, employment, and empowerment. There is a need to recognise these social determinants of health as a critical driving force behind the country's challenges in reaching maternal health targets; the government and stakeholders therefore need to direct complementary measures that address social inequities. The full database of the sample household survey (Kenya Demographic and Health Survey 2014) that supports the findings of this study is available open access to registered users from the DHS program data portal, http://dhsprogram.com/data/available-datasets.cfm [7]. The travel time surfaces are open access at https://doi.org/10.6084/m9.figshare.7160363, linked to work on national and sub-national variation in patterns of febrile case management in sub-Saharan Africa [37].
ANC: Antenatal care ANC1: Proportion of women aged 15–49 years who received ANC services provided by a skilled birth attendant (doctor, nurse or midwife) at least once during pregnancy Proportion of women aged 15–49 years who received ANC services provided by a skilled birth attendant at least four times during pregnancy BYM: Besag-York -Molliè CHWs: KDHS2014: The 2014 Kenya Demographic and Health Survey DHIS2: District health information system version 2 HMIS: Kenya health management Information system MDGs: SSA: SDGs: Blencowe H, Cousens S, Jassir FB, Say L, Chou D, Mathers C, et al. National, regional, and worldwide estimates of stillbirth rates in 2015, with trends from 2000: a systematic analysis. Lancet Glob Heal. 2016;4:e98–108. WHO, UNICEF, UNFPA, World Bank Group, UNPD. Trends in maternal mortality: 1990 to 2015 [Internet]. 2015 [cited 2019 May 25]. Available from: https://apps.who.int/iris/bitstream/handle/10665/194254/9789241565141_eng.pdf?sequence=1&isAllowed=y. PMNCH. Opportunities for Africa's newborns: Practical data, policy and programmatic support for newborn care in Africa [Internet]. 2006 [cited 2019 Feb 25]. Available from: http://www.who.int/pmnch/media/publications/africanewborns/en/. WHO. Integrated Management of Pregnancy and Childbirth. WHO Recommended Interventions for Improving Maternal and Newborn Health [Internet]. Geneva, Switzerland; 2009 [cited 2016 Sep 8]. p. 1–6. Available from: http://apps.who.int/iris/bitstream/10665/69509/1/WHO_MPS_07.05_eng.pdf. WHO. WHO recommendations on antenatal care for a positive pregnancy experience [Internet]. 2016 [cited 2019 Apr 22]. p. 1–72. Available from: https://dl140.zlibcdn.com/download/article/17712983?token=58b55c86c54c84616cc8f4e37385de25. Villar J, Ba'aqeel H, Piaggio G, Lumbiganon P, Miguel Belizán J, Farnot U, et al. WHO antenatal care randomised trial for the evaluation of a new model of routine antenatal care. Lancet. 2001;357:1551–64. ICF. The DHS Program :Available Datasets [Internet]. 2016 [cited 2016 Apr 21]. Available from: http://dhsprogram.com/data/available-datasets.cfm. UNICEF. Antenatal Care [Internet]. 2018 [cited 2018 May 25]. Available from: https://data.unicef.org/topic/maternal-health/antenatal-care/#. Bangha MW, Simelane S. Spatial differentials in childhood mortality in South Africa: evidence from the 2001 census. Etude la Popul Africaine. 2007;22:3–21. Marmot M, Bell R. The sustainable development goals and health equity. Epidemiology. 2017;29:5–7. Stuart E, Woodroffe J. Leaving no-one behind: can the sustainable development goals succeed where the millennium development goals lacked? Gend Dev. 2016;24:69–81. Macharia PM, Giorgi E, Thuranira PN, Joseph NK, Sartorius B, Snow RW, et al. Sub national variation and inequalities in under-five mortality in Kenya since 1965. BMC Public Health. 2019;19:146. PubMed PubMed Central Google Scholar Ntirampeba D, Neema I, Kazembe L. Modelling spatio-temporal patterns of disease for spatially misaligned data: an application on measles incidence data in Namibia from 2005-2014. PLoS One. 2018;13:e0201700. Macharia PM, Giorgi E, Noor AM, Waqo E, Kiptui R, Okiro EA, et al. Spatio-temporal analysis of plasmodium falciparum prevalence to understand the past and chart the future of malaria control in Kenya. Malar J. 2018;17:340. Ouma PO, Maina J, Thuranira PN, Macharia PM, Alegana VA, English M, et al. Access to emergency hospital care provided by the public sector in sub-Saharan Africa in 2015: a geocoded inventory and spatial analysis. Lancet Glob Heal. 
2018;6:e342–50. Utazi CE, Thorley J, Alegana VA, Ferrari MJ, Takahashi S, Metcalf CJE, et al. Mapping vaccination coverage to explore the effects of delivery mechanisms and inform vaccination strategies. Nat Commun. 2019;10:1633. Doku DT, Neupane S. Survival analysis of the association between antenatal care attendance and neonatal mortality in 57 low- and middle-income countries. Int J Epidemiol. 2017;46:1668–77. Keats EC, Ngugi A, Macharia W, Akseer N, Khaemba EN, Bhatti Z, et al. Progress and priorities for reproductive, maternal, newborn, and child health in Kenya: a countdown to 2015 country case study. Lancet Glob Heal. 2017;5:e782–95. KPMG. Devolution of Healthcare Services in Kenya: Lessons Learnt from Other Countries [Internet]. 2013 [cited 2015 May 22]. Available from: https://home.kpmg/ke/en/home.html. GoK. The Constitution of Kenya, 2010 [Internet]. 2010 [cited 2016 May 23]. Available from: http://kenyalaw.org/kl/index.php?id=398. MoH GoK. Kenya Master Health Facility List [Internet]. 2019 [cited 2019 May 30]. Available from: http://kmhfl.health.go.ke/#/home. Maina JK, Macharia PM, Ouma PO, Snow RW, Okiro EA. Coverage of routine reporting on malaria parasitological testing in Kenya, 2015–2016. Glob Health Action. 2017;10:1413266. Ouma PO, Joseph M, Thuranira Pamela N, Macharia Peter M, Alegana Victor A, Mike E, et al. Access to emergency hospital care provided by the public sector in sub-Saharan Africa in 2015: a geocoded inventory and spatial analysis. Lancet Glob Heal. 2018:2214–109. Maina J, Ouma PO, Macharia PM, Alegana VA, Mitto B, Fall IS, et al. A spatial database of health facilities managed by the public health sector in sub Saharan Africa. Sci Data. 2019;6:134. Chuma J, Okungu V. Viewing the Kenyan health system through an equity lens: implications for universal coverage. Int J Equity Health. 2011;10:1–14. Mwabu G. Health care reform in Kenya: a review of the process. Health Policy (New York). 1995;32:245–55. Mwabu G, Mwanzia J, Liambila W. User charges in government health facilities in Kenya: effect on attendance and revenue. Health Policy Plan. 1995;10:164–70. Mwabu GM. Health care decisions at the household level: results of a rural health survey in Kenya. Soc Sci Med. 1986;22:315–9. Collins D, Quick J, Musau S, Kraushaa D, Hussein I. The rise and fall of cost sharing in Kenya: the impact of faced implementation. Health Policy Plan. 1996;11:52–63. Abuya T, Njuki R, Warren CE, Okal J, Obare F, Kanya L, et al. A policy analysis of the implementation of a reproductive health vouchers program in Kenya. BMC Public Health. 2012;12:1 Available from: BMC Public Health. Dennis ML, Abuya T, Maeve O, Campbell R, Benova L, Baschieri A, et al. Evaluating the impact of a maternal health voucher programme on service use before and after the introduction of free maternity services in Kenya : a quasi-experimental study. BMJ Glob Heal. 2018;3:e000726. Dennis ML, Benova L, Abuya T, Quartagno M, Bellows B, Campbell OMR. Initiation and continuity of maternal healthcare: examining the role of vouchers and user-fee removal on maternal health service use in Kenya. Health Policy Plan. 2019;34:120–31. Afulani, et al. Quality of antenatal care and associated factors in a rural county in Kenya : an assessment of service provision and experience dimensions. BMC Health Serv Res. 2019;4:1–16. MoH/GoK. Linda Mama Boresha Jamii: Implementataion manual for programme managers [Internet]. 2016. 
Available from: http://www.health.go.ke/wp-content/uploads/2018/11/implementation-manual-softy-copy-sample-1.pdf. [cited 2019 May 30]. Barasa E, Nguhiu P, McIntyre D. Measuring progress towards Sustainable Development Goal 3.8 on universal health coverage in Kenya. BMJ Glob Heal. 2018;3:e000904 Available from: http://gh.bmj.com/lookup/doi/10.1136/bmjgh-2018-000904. Keats EC, Macharia W, Singh NS, Akseer N, Ravishankar N, Ngugi AK, et al. Accelerating Kenya's progress to 2030: understanding the determinants of under-five mortality from 1990 to 2015. BMJ Glob Heal. 2018;3:e000655. Alegana VA, Maina J, Ouma PO, Macharia PM, Wright J, Atkinson PM, et al. National and sub-national variation in patterns of febrile case management in sub-Saharan Africa. Nat Commun. 2018;9:4994. Macharia PM, Odera PA, Snow RW, Noor AM. Spatial models for the rational allocation of routinely distributed bed nets to public health facilities in Western Kenya. Malar J. 2017;16:367. Warren JL, Perez-Heydrich C, Burgert CR, Emch ME. Influence of demographic and health survey point displacements on raster-based analyses. Spat Demogr. 2016;4:135–53. PubMed Google Scholar Burgert CR, Colston J, Roy T, Zachary B. Geographic displacement procedure and georeferenced data release policy for the Demographic and Health Surveys [Internet]. DHS Spat. Anal. Reports No. 7. 2013. Report No.: 7. Available from: http://dhsprogram.com/pubs/pdf/SAR7/SAR7.pdf. [cited 2019 May 30]. Yeneneh A, Alemu K, Dadi AF, Alamirrew A. Spatial distribution of antenatal care utilization and associated factors in Ethiopia: evidence from Ethiopian demographic health surveys. BMC pregnancy childbirth. BMC Pregnancy Childbirth. 2018;18:1–12. Yaya S, Bishwajit G, Ekholuenetale M, Shah V, Kadio B, Udenigwe O. Timing and adequate attendance of antenatal care visits among women in Ethiopia. PLoS One. 2017;12:e0184934. Chama-Chiliba CM, Koch SF. Utilization of focused antenatal care in Zambia: examining individual- and community-level factors using a multilevel analysis. Health Policy Plan. 2015;30:78–87. Gupta S, Yamada G, Mpembeni R, Frumence G, Callaghan-Koru JA, Stevenson R, et al. Factors associated with four or more antenatal care visits and its decline among pregnant women in Tanzania between 1999 and 2010. PLoS One. 2014;9:e101893. Magadi MA, Madise NJ, Rodrigues RN. Frequency and timing of antenatal care in Kenya: explaining the variations between women of different communities. Soc Sci Med. 2000;51:551–61. Okedo-Alex IN, Akamike IC, Ezeanosike OB, Uneke CJ. Determinants of antenatal care utilisation in sub-Saharan Africa: a systematic review. BMJ Open. 2019;9:e031890. Ruktanonchai CW, Ruktanonchai NW, Nove A, Lopes S, Pezzulo C, Bosco C, et al. Equality in maternal and newborn health: Modelling geographic disparities in utilisation of Care in Five East African Countries. PLoS One. 2016;11:e0162006. Bates D, Machler M, Bolker B, Walker S. Fitting linear mixed-effects models using lme4. J Stat Softw. 2015;67. Riebler A, Sørbye SH, Simpson D, Rue H. An intuitive Bayesian spatial model for disease mapping that accounts for scaling. Stat Methods Med Res. 2015;25:1145–65. Anderson C, Ryan LM. A comparison of spatio-temporal disease mapping approaches including an application to ischaemic heart disease in New South Wales, Australia. Int J Environ Res Public Health. 2017;14.146. Aregay M, Lawson AB, Faes C, Kirby RS, Carroll R, Watjou K. Comparing multilevel and multiscale convolution models for small area aggregated health data. Spat Spatiotemporal Epidemiol. 
2017;22:39–49. Okiro EA. Estimates of subnational health trends in Kenya. Lancet Glob Heal. 2019;7:e8–9. Raschka S. Model Evaluation , Model Selection, and Algorithm Selection in Machine Learning [ cs . LG ] 3 Dec 2018. Univ Wisconsin-Madison. 2018;1:1–49. Gakidou E, Cowling K, Lozano R, Murray CJ. Increased educational attainment and its effect on child mortality in 175 countries between 1970 and 2009: a systematic analysis. Lancet. 2010;376:959–74. Das GM. Death Clustering , Mothers' Education and the Determinants of Child Mortality in Rural Punjab , India. Popul Stud (NY). 2010;44:37–41. Byhoff E, Hamati MC, Power R, Burgard SA, Chopra V. Increasing educational attainment and mortality reduction: a systematic review and taxonomy. BMC Public Health. 2017;17:719. Cadwell J. Mortality decline an examination of Nigerian data. Popul Stud (NY). 1979;33:395–413. Cleland JG, van Ginneken JK. Maternal education and child survival in developing countries: the search for pathways of influence. Soc Sci Med. 1988;27:1357–68. Frings M, Lakes T, Müller D, Khan MMH, Epprecht M, Kipruto S, et al. Modeling and mapping the burden of disease in Kenya. Sci Rep. 2018;8:1–9. Opiyo F, Wasonga O, Nyangito M, Schilling J. Drought adaptation and coping strategies among the Turkana pastoralists of northern Kenya. Int J Disaster Risk Sci. 2015;6:295–309. Noor AM, Alegana VA, Gething PW, Snow RW. A spatial national health facility database for public health sector planning in Kenya in 2008. Int J Health Geogr. 2009;8:13. Thomson S. Achievement at school and socioeconomic background — an educational perspective. Npj Sci Learn. 2018;2:1–3. MoH/GoK. Linda Mama Boresha Jamii: Implementataion manual for programme managers [Internet]. 2016 [cited 2019 Jul 22]. Available from: http://www.health.go.ke/wp-content/uploads/2018/11/implementation-manual-softy-copy-sample-1.pdf. Rosário EVN, Gomes MC, Brito M, Costa D. Determinants of maternal health care and birth outcome in the Dande health and demographic surveillance system area, Angola. PLoS One. 2019;14:e0221280. Muchie KF. Quality of antenatal care services and completion of four or more antenatal care visits in Ethiopia : a finding based on a demographic and health survey. BMC Pregnancy Childbirth. 2017;17:1–7. Monica, et al. The Determinants of Delivery Care in Kenya. Soc Biol. 2000;47:164–88. Beyond Zero. Beyond Zero Intiative [Internet]. 2018 [cited 2018 Feb 22]. Available from: https://www.beyondzero.or.ke/about-us/. WHO. Community health workers :What do we know about them? The state of the evidence on programmes, activities, costs and impact on health outcomes of using community health workers [Internet]. 2007 [cited 2019 Sep 25]. Available from: https://www.who.int/hrh/documents/community_health_workers.pdf. Ngandu NK, Manda S, Besada D, Rohde S, Oliphant NP, Doherty T. Does adjusting for recall in trend analysis affect coverage estimates for maternal and child health indicators? An analysis of DHS and MICS survey data. Glob Health Action. 2016;9:32408. Githinji S, Oyando R, Malinga J, Ejersa W, Soti D, Rono J, et al. Completeness of malaria indicator data reporting via the district health information software 2 in Kenya, 2011–2015. Malar J. 2017;16:344. Karuri J, Waiganjo P, Orwa D, Manya A. DHIS2: the tool to improve health data demand and use in Kenya. J Health Inform Dev Ctries. 2014;8:38–60. Maina I, Wanjala P, Soti D, Kipruto H, Boerma T. Using health-facility data to assess subnational coverage of maternal and child health indicators , Kenya. 
Bull World Health Organ. 2017;95:683–94. Alegana VA, Okiro EA, Snow RW. Routine data for malaria morbidity estimation in Africa: challenges and prospects. BMC Med. 2020;18:121. Alegana VA, Khazenzi C, Akech SO, Snow RW. Estimating hospital catchments from in-patient admission records : a spatial statistical approach applied to malaria. Sci Rep. 2020;10:1324. Funding was provided to EAO as part of her Wellcome Trust Intermediate Fellowship (number 201866); KGW, NKJ, PMM, and EAO, acknowledge the support of the Wellcome Trust to the Kenya Major Overseas Programme (number 203077); PMM and KGW acknowledges support for their PhD and PgDip respectively through the DELTAS Africa Initiative [DEL-15-003]. The DELTAS Africa Initiative is an independent funding scheme of the African Academy of Sciences (AAS)'s Alliance for Accelerating Excellence in Science in Africa (AESA) and supported by the New Partnership for Africa's Development Planning and Coordinating Agency (NEPAD Agency) with funding from the Wellcome Trust [number 107769/Z/10/Z] and the UK government. Additional support provided by Wellcome Trust Principal fellowship to Professor Robert W Snow (numbers 103602 and 212176). The views expressed in this publication are those of the authors and not necessarily those of AAS, NEPAD Agency, Wellcome Trust or the UK government. The funder of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report. Population Health Unit, Kenya Medical Research Institute-Wellcome Trust Research Programme, Nairobi, Kenya Kefa G. Wairoto, Noel K. Joseph, Peter M. Macharia & Emelda A. Okiro Centre for Tropical Medicine and Global Health, Nuffield Department of Clinical Medicine, University of Oxford, Oxford, OX3 7LJ, UK Emelda A. Okiro Kefa G. Wairoto Noel K. Joseph Peter M. Macharia KGW undertook the data assembly, data checking, analysis and writing of the first draft of the manuscript. PMM provided support in model development and analysis and contributed to the first draft of the manuscript. NKJ and PMM contributed to data assembly, data checking, interpretation and revision of the manuscript drafts. PMM and EAO conceived the project, provided overall management, interpretation of results and contributed to second drafts of the manuscript. All authors reviewed the final analysis, have access to the data and approved the final manuscript. All authors read and met ICMJE criteria for authorship. Correspondence to Peter M. Macharia. This is a retrospective study of secondary data (Kenya and Demographic Health Survey 2014-KDHS 2014) that are publicly available. The procedures and questionnaires for DHS surveys have been reviewed and approved by the ICF International Institutional Review Board. Not applicable. The manuscript does not contain any individual person's data. List of Counties (bold) and their respective sub-county (numbered) as presented in Fig. 1 of the main manuscript. The analytical process used to estimate the coverage ANC4 and its significant determinants at sub-county level using the 2014 Kenya Demographic and Health Survey. The datasets and outputs are shown in green while processes are shown in orange. The mean coverage of ANC4 in 2014 for each of the 295 sub-counties of Kenya. Wairoto, K.G., Joseph, N.K., Macharia, P.M. et al. Determinants of subnational disparities in antenatal care utilisation: a spatial analysis of demographic and health survey data in Kenya. BMC Health Serv Res 20, 665 (2020). 
https://doi.org/10.1186/s12913-020-05531-9

Keywords: Spatial variation; Sub-national
Fiscal Redistribution and Ethnoracial Inequality in Bolivia, Brazil, and Guatemala

Nora Lustig, Tulane University, US

Nora Lustig is Samuel Z. Stone Professor of Latin American Economics and director of the Commitment to Equity Institute, Tulane University, and nonresident fellow at the Center for Global Development and the Inter-American Dialogue.

Abstract: Afro-descendants and indigenous peoples in Latin America face higher poverty rates and are disproportionately represented among the poor. The probability of being poor is between two and three times higher for indigenous peoples and Afro-descendants than for whites. Using comparable fiscal incidence analyses for Bolivia, Brazil, and Guatemala, I analyze how much poverty and inequality change in the ethnoracial space after fiscal interventions. Although taxes and transfers tend to reduce the ethnoracial gaps, the change is very small. While per capita cash transfers tend to be higher for the nonwhite population, spending on these programs is too low, especially when compared with the disproportionate number of poor people among nonwhites.

How to Cite: Lustig, N. (2017). Fiscal Redistribution and Ethnoracial Inequality in Bolivia, Brazil, and Guatemala. Latin American Research Review, 52(2), 208–220. DOI: http://doi.org/10.25222/larr.90. Submitted on 01 Mar 2016; accepted on 08 Nov 2016; published on 16 Aug 2017.

Ethnic and racial differences in human capital and earnings are one of the key determinants of inequality in Latin America (De Ferranti et al. 2004; Hall and Patrinos 2006; Ñopo 2012). Although the specific social dynamics faced by Afro-descendants and indigenous peoples differ, both of these populations have systematically higher poverty rates and are disproportionately represented among the poor.1 Fiscal redistribution is one of the policy levers the state has to address ethnoracial inequalities. As shown in a series of existing studies that apply the common methodological framework developed at the Commitment to Equity Institute (Tulane University), fiscal policy unambiguously reduces income inequality, albeit to different degrees.2 Does fiscal policy also reduce income inequality between ethnic and racial groups?3 This article analyzes the effects of fiscal policy on inequality along ethnic and racial lines for Bolivia, Guatemala, and Brazil in 2009, the year for which comparable analyses for these three countries are available.4 These three countries were selected because they have large Afro-descendant (Brazil) and indigenous (Bolivia and Guatemala) populations, and welfare gaps between whites and nonwhites are quite high. Using the self-identification method to classify individuals,5 the indigenous populations in Bolivia and Guatemala represent 54.2 and 40.7 percent of the total populations, respectively, the largest shares in the region.
In the last census round in Brazil, 50.8 percent of the national population self-identified as Afro-descendant, the largest share both in absolute and relative terms in Latin America.6 A salient indicator of ethnoracial inequality is the extent to which the indigenous and Afro-descendant populations are disproportionately underrepresented among the higher income groups and disproportionately overrepresented among the poor.7 If whites and nonwhites were similarly affected by social circumstances and market forces, one would expect the share of the nonwhite population at every income stratum to be roughly equal to its share in the total population. However, this is far from true. In Bolivia, Brazil, and Guatemala, the share of the nonwhite population is roughly 54.2, 50.8, and 40.7 percent, respectively. And yet, before fiscal redistribution, nonwhites represent 71.7 percent of the poor in Bolivia, 74.2 percent in Brazil, and 60.8 percent in Guatemala. The overrepresentation of nonwhites among the poor is just the other side of the coin of how poverty affects the different ethnoracial groups. The indigenous and Afro-descendant peoples face a considerably higher probability of being poor than does the white population. The probability of being poor is measured by the head count ratio.8 In Bolivia and Guatemala, the head count ratio (with the national poverty line) is 14.7 and 20.6 percent, respectively, for the nonindigenous population, but 31.5 and 46.6 percent, respectively, for the indigenous population. In Brazil, the head count ratio is 5.2 percent for the white population but 14.6 percent for Afro-descendants. To what extent does fiscal policy reduce the gap in the probability of being poor for different ethnic and racial groups? I will answer this question using fiscal incidence analysis, which essentially allows us to trace how people's incomes change from before taxes and transfers to after taxes and transfers. The specific components of fiscal policy looked at here are direct taxes and transfers (such as personal income tax and cash transfers), indirect taxes (such as value-added taxes or VAT and excise taxes), and subsidies (such as energy and food price subsidies). The article draws from the following Commitment to Equity Institute studies: Bolivia: Paz Arauco et al. (2013, 2014); Brazil: Higgins and Pereira (2013 and 2014) and Pereira (2017); and Guatemala: Cabrera, Lustig, and Moran (2015).9 The results can be compared across countries because the studies use the same fiscal incidence methodological framework, described in Lustig and Higgins (2013) and Lustig (2017a). The household surveys used for the analyses are, in Bolivia, Encuesta de Hogares, 2009; in Brazil, Pesquisa de Orçamentos Familiares, 2009; and in Guatemala, Encuesta Nacional de Ingresos y Gastos de las Familias, 2009–2010. In the three countries, although fiscal policy reduces ethnoracial gaps, the change is negligibly small and in some instances nonexistent. While, as I will show below, cash transfer programs targeted to the poor tend to redistribute more resources to Afro-descendants and indigenous groups, they are too small to make a significant difference in terms of ethnoracial differentials in poverty rates.
Fiscal Incidence Analysis: A Brief Overview

Fiscal incidence analysis is used to assess the distributional impacts of a country's taxes and transfers.10 Essentially, it consists of allocating taxes and public spending (particularly social spending) to households or individuals so that one can compare incomes before with incomes after net taxes. The fiscal incidence analysis used here—known as the "accounting approach"—is a point-in-time analysis and does not incorporate behavioral or general equilibrium effects. That is, no claim is made that the original or market income equals the true counterfactual income in the absence of taxes and transfers. It is a first-order approximation that measures the average incidence of fiscal interventions. However, the analysis is not a mechanically applied accounting exercise. The incidence of taxes is the economic rather than statutory incidence. Consistent with other conventional tax incidence analyses, here we assume that the economic burden of direct personal income taxes is borne by the recipient of income. The burden of payroll and social security taxes is assumed to fall entirely on workers. It is assumed that individual income taxes and contributions by both employees and employers, for instance, are borne by labor in the formal sector. Individuals who are not contributing to social security are assumed to pay neither direct taxes nor contributions. Consumption taxes are assumed to be shifted forward to consumers. In the case of consumption taxes, the analyses take into account the lower incidence associated with own consumption, rural markets, and informality. These assumptions are strong because, in essence, they imply that labor supply is perfectly inelastic and that consumers have perfectly inelastic demands for goods and services. In practice they provide a reasonable approximation.11 A fiscal incidence study must start by defining the basic income concepts that are used to assess the effects of fiscal policy on people's incomes and their distribution. In this article, there are three basic income concepts. Market income, also called primary or original income, is total current income before direct taxes and transfers.12 It equals the sum of gross (pretax) wages and salaries in the formal and informal sectors (also known as earned income); income from capital (dividends, interest, profits, rents, and so on) in the formal and informal sectors (excluding capital gains and gifts); consumption of own production; imputed rent for owner-occupied housing; and private transfers (remittances, pensions from private schemes, and other private transfers such as alimony). Disposable income is market income minus direct personal income taxes on all income sources (included in market income) that are subject to taxation, plus direct government transfers (mainly cash transfers but also including near-cash transfers such as food transfers, free textbooks, and school uniforms). Consumable income is disposable income plus indirect subsidies (such as food and energy price subsidies) minus indirect taxes (such as value-added taxes, excise taxes, and sales taxes). These concepts are summarized in Figure 1.

Figure 1. Fiscal redistribution and income concepts. Source: Lustig and Higgins (2013).
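The accounting that links the three income concepts can be written down in a few lines. The sketch below is schematic rather than the CEQ studies' implementation: the household data frame hh and its per capita component columns are hypothetical placeholders for the items listed in the definitions above.

```r
# Schematic construction of the three income concepts for a hypothetical
# household-level data frame `hh` with per capita fiscal components.

# Market (prefiscal) income: earned income, capital income, own
# consumption, imputed rent and private transfers.
hh$market_income <- with(hh, wages + capital_income + own_consumption +
                               imputed_rent + private_transfers)

# Disposable income: subtract direct personal taxes and contributions,
# add direct cash and near-cash transfers.
hh$disposable_income <- with(hh, market_income - direct_taxes -
                                   contributions + direct_transfers)

# Consumable income: subtract indirect taxes (VAT, excises), add indirect
# subsidies (energy, food).
hh$consumable_income <- with(hh, disposable_income - indirect_taxes +
                                   indirect_subsidies)
```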
Taxes, Transfers, and Subsidies

Bolivia

In 2009, total tax revenues amounted to 26.9 percent of GDP. In Bolivia, income tax is negligible and was not included in the analysis.13 Four indirect taxes applied to consumption were included in the analysis: a value-added tax (VAT), a transaction tax, a special tax on hydrocarbons, and a specific consumption tax (excise taxes), which together accounted for 41 percent of total tax revenues in 2009 (or 11 percent of GDP). In 2009, the year of the survey used for the fiscal incidence analysis, spending on direct transfers equaled 2.0 percent of GDP. With spending at 1.4 percent of GDP in 2009, the largest direct cash transfer program was the noncontributory pension program Renta Dignidad. Implemented in 2008, the program built on an earlier transfer program created in 1994 (Bono Solidario, known as Bonosol). Beneficiaries are citizens aged sixty or older. In 2009, the program benefited close to 800,000 people. Direct transfers also included two flagship conditional cash transfer programs: Bono Juancito Pinto and Bono Juana Azurduy. The Bono Juancito Pinto was launched to promote primary school attendance. All children between six and nineteen years of age attending public schools are eligible for the program. The transfer consists of a yearly payment equal to 200 bolivianos, approximately $0.18/day (in 2005 purchasing power parity, or PPP), paid once a year, conditional on proven attendance during the school year.14 According to the program's roster, 1.7 million children benefited from the transfer in 2009, with public expenditures equaling 0.3 percent of GDP. The Bono Juana Azurduy was created in 2009 with the purpose of promoting prenatal health, infant checkups, and an increase in the rate of hospital-attended births. Only mothers and children without access to health insurance are eligible. The program consists of a maximum transfer of 1,820 bolivianos over a maximum period of thirty-three months (equivalent to an average of $0.58 PPP/day). According to the program registry, 776,045 women and children benefited from the program in 2009. Public expenditure on the program reached 0.02 percent of GDP. Desayuno Escolar (school breakfast program) provides breakfast to children between the ages of four and nineteen who attend school. The per capita average cost of the program is 9 bolivianos per month, about $0.10 PPP/day. In 2008, the program benefited 1,985,158 people. Resources spent on the program reached 0.2 percent of GDP in 2009. The war veterans' transfer program Beneméritos del Chaco consists of an average monthly payment of 1,254 bolivianos, equivalent to $13.2 PPP/day, paid to veterans of the Chaco War (1932–1935). In 2009, the payment benefited more than one thousand veterans, and resources spent on the program amounted to 0.14 percent of GDP. Indirect subsidies include the subsidized fraction of liquid gas and the subsidized fraction of gasoline consumed by households. In 2009 subsidies were equivalent to 0.6 percent of GDP.

Brazil

Total fiscal revenues at the federal, state, and municipal levels were about 35 percent of GDP in 2009, much higher than the Latin American average.15 Direct taxes (personal and corporate) represented 45 percent of the taxes levied by the government, and indirect taxes represented 55 percent. The Brazilian tax system is exceedingly complex. The taxes included in the incidence analysis are direct personal income taxes and consumption taxes. The most important indirect tax is the ICMS, a state tax levied on the sale or physical movement of goods, freight, transportation, communications services, and electricity.
ICMS accounted for 21 percent of fiscal revenues in 2009. Other important indirect taxes are the COFINS (federal tax on goods and services to finance the social security deficit), ISS (municipal tax on services), PIS (federal tax on goods and services to finance social services for workers), and IPI (federal tax on industrial products). They correspond to 10.8, 4.1, 2.9, and 2.8 percent of fiscal revenues, respectively. Many indirect taxes are administered by their own department, which may be at the federal, state, or municipal level, and compound on one another, something known as the "cascading effect." These effects are especially important due to their impact on consumer purchasing power. Exemptions on consumption taxes are almost nonexistent in Brazil and, hence, the effective rates paid on basic food products can be especially deleterious for the poor. In 2009, direct transfers included in the incidence analysis were roughly equal to 4 percent of GDP. Special Circumstances Pensions, the largest program in terms of spending (2.3 percent of GDP), are designed to smooth the impact of idiosyncratic shocks or are means tested. They are paid in the case of an accident at work, sickness, or a related idiosyncratic shock. Although these pensions are funded by the contributory pension system, they are considered noncontributory because they have low or no requirements in terms of the length of the contribution period. In 2009, there were about 2.9 million beneficiaries (INSS 2010) and the average benefit per person was $5.22 PPP per day. Bolsa Família, Brazil's flagship conditional cash transfer program, transfers cash to eligible families in exchange for complying with certain conditions. Eligible families are poor families with children under eighteen years of age or with pregnant women, and all extremely poor families (the latter regardless of whether they have children). Eligibility is determined through partially verified means testing; households with income below the cutoffs are incorporated into the program. The conditions are prenatal and postnatal care sessions for pregnant women, adherence to a calendar of vaccinations for children up to age five, and a minimum level of school attendance for children ages six to seventeen. There are no conditions for the "fixed benefit" given to extremely poor households. In 2009, the government spent 0.4 percent of GDP on 41.2 million individuals living in beneficiary families, and the average benefit per person living in a beneficiary household was US$0.35 PPP per day. Benefício de Prestação Continuada (Continued Payment Benefits, BPC) is a noncontributory pension program that provides a monthly monetary transfer of one minimum salary (465 reais per month or US$8.83 PPP per day in 2009) to the elderly poor or incapacitated poor. Elderly means sixty-five years old and older, and incapacity is determined by doctors based on ability to work. In 2009, the government spent 0.5 percent of GDP on this program; there were 3.2 million beneficiaries, and the average benefit per person living in a beneficiary household was US$2.18 PPP per day. Other social transfers such as Food Transfer Programs (Programa de Aquisição de Alimentos, PAA), Unemployment Insurance, and other smaller programs represented 0.9 percent of GDP. The main indirect subsidy in Brazil, the Social Tariff on Electric Energy (TSEE), is a price subsidy on energy for low-income households with total energy consumption below 220 kilowatt hours per month.
In 2009, the average benefit per person in a beneficiary household was US$0.36 PPP per day.

Guatemala

One of the structural features of the Guatemalan tax system is the low level of tax revenues.16 Total tax revenue as a percentage of GDP (including contributions to the social security system) is only 12.2 percent. Direct taxes composed almost 27 percent of the total, while indirect taxes were a little over 60 percent. Of total direct taxes, personal income tax is only 2.9 percent. The VAT is over 40 percent of total tax revenues. The VAT general rate is 12 percent and zero for exports. Generic medicines, certain financial services, education, low-value sales of food bought in cantonal and municipal markets (value less than 100 quetzales, approximately $13), and resale of real estate property are exempt. Other indirect taxes, which include excise taxes on consumption of gasoline and diesel, beverages, tobacco, stamp tax, and cement, amount to 12.6 percent of total tax revenues. In 2010, there were five main cash transfer programs: a conditional cash transfer (CCT) called Mi Familia Progresa (MIFAPRO), a noncontributory pension program called the Economic Assistance Program for the Elderly (Programa de Aporte Económico del Adulto Mayor), a food transfer program called Bolsa Solidaria, two educational scholarship programs called Bolsa de Estudio and Becas Solidarias, and a small cash transfer for transportation called Bono de Transporte. From this list, the most relevant programs are MIFAPRO and the noncontributory pension. Together they represented 0.5 percent of GDP; the rest were very small programs that together amounted to 0.1 percent of GDP. In 2010, spending on the MIFAPRO program was 0.4 percent of GDP, the number of beneficiaries was 2.7 million, and the average per capita transfer among beneficiary households was about $57 PPP per year. The program covered 51 percent of the indigenous poor and 23 percent of the nonindigenous poor. The most important consumption subsidies were a subsidy on electricity for households that consume less than 300 kilowatt hours per month, and a public transportation subsidy delivered to owners of public buses (in Guatemala City and other major cities of the country). Together, the two subsidies represent 0.3 percent of GDP, and their beneficiaries lived in urban areas.

Indicators of Fiscal Redistribution in the Ethnoracial Space

A fiscal incidence analysis designed to assess how governments reduce the welfare gap between ethnic and racial groups needs to identify indicators that can capture how income inequities across these groups change with fiscal interventions. The most obvious indicator of ethnoracial income inequality is the ratio of per capita income between the indigenous and nonindigenous populations and between Afro-descendants and the white population. A second commonly used indicator is the contribution of inequality between ethnic or racial groups to overall inequality. This is usually measured by identifying the between- and within-group inequality with a standard decomposable inequality indicator such as the Theil index. In addition, a society with high ethnoracial equity should feature fairly equal opportunities across ethnic and racial groups. To assess the extent to which fiscal policy equalizes opportunities, following the ideas originally set out by Roemer (1998) and their application by Barros et al. (2009), I propose to use an indicator that can track the extent to which taxes and transfers reduce the inequality associated with circumstances.
Circumstances are predetermined factors that are not dependent on an individual's effort, such as ethnicity and race, gender, place of birth, and parents' education or parents' income. In these national surveys, information on parents or place of birth is not available. Thus, for our purposes, circumstances include race or ethnic group, gender, and location (rural or urban).17 A third key indicator is, of course, the extent to which the indigenous and Afro-descendant populations are disproportionately underrepresented among the higher income groups and disproportionately overrepresented among the poor. As stated before, if whites and nonwhites were similarly affected by social circumstances and market forces, one would expect the share of the nonwhite population at every income stratum to be roughly equal to its share in the total population, and the probability of being poor to be approximately the same for whites and nonwhites. The probability of being poor is simply the head count ratio (also known as the incidence of poverty), that is, the poor population divided by the total population.18 The question is to what extent fiscal redistribution reduces these measures of inequity across ethnic and racial groups. In order to estimate the impact of fiscal redistribution, we need to calculate the above indicators for the three different income concepts presented in Figure 1: market income, disposable income, and consumable income. Ethnic and racial inequality in all three countries is high (Table 1). Per capita market (prefiscal) income of the white population is between 60 percent and two times higher than that of the Afro-descendant or indigenous populations. As also shown in Table 1, the indigenous and Afro-descendant populations represent a considerably larger share of the poor than they do of the total population. The probability of being poor (measured by the head count ratio using national extreme poverty lines) is between two and three times higher for indigenous peoples and Afro-descendants than for whites.19 Although not shown in the table, average educational attainment levels are roughly between two and three years lower for the Afro-descendant or indigenous populations in all three countries.

Table 1. Ethnic and racial inequality before taxes and transfers: Bolivia, Brazil, and Guatemala.

Indicator | Bolivia (2009) | Brazil (2009) | Guatemala (2009–2010)
Ratio of white/nonwhite average per capita market income | 1.5 | 2.1 | 2.1
Contribution of between-group inequality component to overall inequality (%) | 4.9 | 9.1 | 8.5
Head count ratio of white population (%) | 14.7 | 5.2 | 20.6
Head count ratio of nonwhite population (%) | 31.5 | 14.6 | 46.6
Nonwhite population in total population (%) | 54.2 | 50.8 | 40.7
Nonwhite population in poor population (%) | 71.7 | 74.2 | 60.8

Source: Author's calculations based on the following sources: Bolivia: Paz Arauco et al. (2013); Brazil: Higgins and Pereira (2013); and Guatemala: Cabrera, Lustig, and Moran (2015).

Notes: Household surveys: Bolivia (Encuesta de Hogares, 2009), Brazil (Pesquisa de Orçamentos Familiares, 2009), and Guatemala (Encuesta Nacional de Ingresos y Gastos de las Familias, 2009–2010). All the measures presented above use prefiscal or market income, defined as gross wages and salaries, income from capital, private transfers and contributory pensions; it includes consumption of own production (except for Bolivia) and imputed rent for owner-occupied housing. The nonwhite population for Bolivia and Guatemala refers to the indigenous population and, in the case of Brazil, to the Afro-Brazilian (pardo and preto) population. Poverty is measured for per capita market income with national extreme poverty lines. The contribution of the between ethnic and racial groups component to overall inequality corresponds to the "between" component of a standard decomposition of the Theil index.
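The indicators reported in Table 1 and used in the rest of the article can be computed with a few lines of code. The sketch below is illustrative rather than the incidence studies' own: it assumes a person-level data frame d with per capita income y (strictly positive), a group label ("white"/"nonwhite"), circumstance variables for the household head's gender and rural/urban location, and a scalar extreme poverty line z; all names are hypothetical.

```r
# Illustrative computation of the ethnoracial indicators (hypothetical
# person-level data frame `d`; incomes assumed strictly positive).

# Ratio of white to nonwhite average per capita income.
mu_g  <- tapply(d$y, d$group, mean)
ratio <- mu_g["white"] / mu_g["nonwhite"]

# Theil T index and its between-group component: the share of overall
# inequality accounted for by differences in group means.
theil   <- function(y) mean((y / mean(y)) * log(y / mean(y)))
shares  <- as.vector(table(d$group)) / nrow(d)   # population shares
between <- sum(shares * (mu_g / mean(d$y)) * log(mu_g / mean(d$y)))
share_between <- between / theil(d$y)

# Head count ratios (probability of being poor) by group, poverty line z.
headcount <- tapply(d$y < z, d$group, mean)

# Inequality of opportunity: mean log deviation of the "smoothed"
# distribution, replacing each income by its circumstance-group mean
# (ethnicity/race, gender of household head, rural/urban).
mld <- function(y) mean(log(mean(y) / y))
d$y_smooth <- ave(d$y, d$group, d$female_head, d$rural, FUN = mean)
ineq_opportunity <- mld(d$y_smooth)
```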
Fiscal Policy, Inequality, and Poverty in the Ethnoracial Space

What is the impact of direct taxes and direct transfers on ethnic and racial inequality? Using the indicators described above, Table 2 reveals that the impact is negligible. Although the indicators tend to move in the right direction, the order of magnitude of the change is quite small or nonexistent. The ratio of average per capita incomes by ethnicity or race declines by one decimal point (Bolivia and Brazil) to nothing (Guatemala).20 Inequality of opportunity also declines by a relatively small amount. While fiscal policy reduces overall inequality in all three countries, it is interesting to note that the contribution of the between-race component in Brazil increases. This means that in Brazil, fiscal policy is making intraracial inequality fall at a faster rate than interracial inequality.

Table 2. Ethnoracial gaps before (market income) and after direct taxes and transfers (disposable income): Bolivia, Brazil, and Guatemala.

                                                    Bolivia               Brazil              Guatemala
                                               Market  Disposable    Market  Disposable   Market  Disposable
White/nonwhite average per capita income         1.5      1.5          2.1      2.0         2.1      2.1
Contribution of between-race inequality (%)      4.9      4.8          9.1      9.2         8.5      8.4
Inequality of opportunity                       0.092    0.082        0.096    0.083       0.197    0.195
Head count ratio of white population (%)         14.7     13.4          5.2      3.1        20.6     20.2
Head count ratio of nonwhite population (%)      31.5     28.3         14.6      9.3        46.6     44.0

Notes: Household surveys: Bolivia (Encuesta de Hogares, 2009), Brazil (Pesquisa de Orçamentos Familiares, 2009), and Guatemala (Encuesta Nacional de Ingresos y Gastos de las Familias, 2009–2010). All the measures presented above use market (prefiscal) income, defined as gross wages and salaries, income from capital, private transfers and contributory pensions; it includes consumption of own production (except for Bolivia) and imputed rent for owner-occupied housing. Disposable income equals market income minus personal income taxes and contributions to social security plus direct transfers (cash and near cash such as food and school uniforms). Poverty is measured for per capita income with national extreme poverty lines. Inequality of opportunity is measured by the mean log deviation of the smoothed distribution with gender of head, location (rural or urban), and race/ethnicity as circumstances.

Although the difference in the probability of being poor declines after direct taxes and transfers,21 as Table 3 shows, the differences in head count ratios by ethnic group and race remain very large. More importantly, when the combined effect of both direct and consumption taxes net of transfers and subsidies is considered, fiscal interventions slightly reduce the differences in the probability of being poor between ethnic and racial groups in Guatemala and Bolivia, but in Brazil there is no change. That is, in Brazil the effect of cash transfers on narrowing the difference in the probability of being poor between Afro-descendants and whites is completely wiped out by the effect of consumption taxes.

Table 3. Differences in the probability of being poor by ethnic and racial group, by income concept: market income, disposable income, and consumable income.
Difference in head count ratios for the nonwhite population minus head count ratios for the white population, in percentage points:

             Market income   Disposable income   Consumable income
Bolivia           16.8              14.9                15.0
Brazil             9.4               6.2                 9.4
Guatemala         25.9              23.8                24.6

Source: Author's calculations based on the following sources: Bolivia: Paz Arauco et al. (2013); Brazil: Higgins and Pereira (2013); and Guatemala: Cabrera, Lustig, and Moran (2015). The probability of being poor is measured as the head count ratio. Market (prefiscal) income is defined as gross wages and salaries, income from capital, private transfers and contributory pensions; it includes consumption of own production (except for Bolivia) and imputed rent for owner-occupied housing. Disposable income equals market income minus personal income taxes and contributions to social security plus direct transfers (cash and near cash such as food and school uniforms). Consumable income is defined as disposable income minus consumption taxes (e.g., value-added taxes, sales taxes, etc.) plus consumption subsidies (e.g., food, energy, etc.).

Incidence of Direct Cash Transfers across Ethnic and Racial Groups

As seen above, fiscal interventions have little impact on the indicators we selected to measure the ethnoracial gap in the income space. In fact, when (net) indirect taxes are added, the differences in the probability of being poor do not change (Brazil) or fall only slightly (Bolivia and Guatemala). Are there specific characteristics of the fiscal system that may be associated with these rather disappointing outcomes? As shown in Figures 2, 3, and 4, the incidence of direct cash transfers for the bottom deciles of the Afro-descendant and indigenous populations is higher than for the bottom deciles of the nonindigenous and white populations. Thus, the fact that fiscal policy is so limited in its ability to reduce ethnoracial income gaps is more a consequence of the small size of cash transfers than of their distribution between ethnic and racial groups.

Figure 2. Bolivia: incidence of direct cash transfers by ethnic and racial groups (shares in percent). Author's calculations based on Paz Arauco et al. (2013).
Figure 3. Brazil: incidence of direct cash transfers by ethnic and racial groups (shares in percent). Author's calculations based on Higgins and Pereira (2013).
Figure 4. Guatemala: incidence of direct cash transfers by ethnic and racial groups (shares in percent). Author's calculations based on Cabrera, Lustig, and Moran (2015).

The flagship cash transfers are Bono Juancito Pinto in Bolivia, Bolsa Família in Brazil, and Mi Familia Progresa in Guatemala.22 The share of GDP (in the year of the survey) allocated to each is only 0.3, 0.4, and 0.3 percent, respectively. Given the huge income gaps and the large differences in the numbers of poor people, it is not surprising that redistributive policy achieves so little reduction in inequity in the ethnoracial space.
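One common way to present incidence profiles like those in Figures 2–4 is as concentration shares: the percentage of total transfers received by each market-income decile within each ethnoracial group. The short sketch below computes such shares; the income and transfer arrays are hypothetical placeholders, and the exact definition used in the figures may differ.

```python
import numpy as np

def incidence_by_decile(market_income, transfers):
    """Share of total transfers (in percent) received by each market-income decile."""
    order = np.argsort(market_income)
    deciles = np.array_split(order, 10)          # bottom decile first
    total = transfers.sum()
    return np.array([100.0 * transfers[d].sum() / total for d in deciles])

# Hypothetical data for one ethnoracial group (illustrative only).
rng = np.random.default_rng(1)
income = rng.lognormal(5.5, 0.9, size=2000)
# Transfers concentrated, by construction, on the poorer 40 percent of the group.
transfers = rng.gamma(2.0, 10.0, size=2000) * (income < np.quantile(income, 0.4))

print(incidence_by_decile(income, transfers).round(1))
```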
Furthermore, while it is generally true that per capita cash transfers for the indigenous and Afro-descendant populations are systematically higher in the three countries, there is an exception: the Special Circumstances Pensions (Pensões e Outros Benefícios) in Brazil. The cash transfers in this program are designed to smooth the impact of idiosyncratic shocks such as accidents at work, sickness, or other individual shocks. In 2009, there were about 2.9 million beneficiaries (INSS 2010) and the average benefit per person was $5.22 PPP per day. Although these transfers are funded by the contributory pension system, they are considered noncontributory because they have low or no requirements in terms of length of time of contribution. To be eligible, however, individuals must be registered in the social security system. There clearly must be a factor that explains why Afro-descendants are less likely to register. This is a result that deserves further investigation so that this bias toward the white population can be corrected.

The importance of reducing ethnoracial inequalities arises both from what this divide means ethically and from its causes and consequences. Today's ethnic and racial inequalities are often the product of morally condemnable societal actions such as discrimination in the present and the subjugation of indigenous groups and slavery in the past. Due to the discrimination that indigenous peoples and Afro-descendants face in Latin America and the resulting gaps, these ethnic and racial groups are what philosophers would call "morally relevant" groups and, as a result, policies developed to correct these gaps would be deemed ethically acceptable (Nickel 2002). In addition, ethnic and racial inequalities are found to be associated with lower overall development and growth (Alesina et al. 2012; Easterly and Levine 1997). Thus, addressing ethnoracial inequalities may have the additional benefit of generating higher welfare levels for everyone.

While fiscal redistribution can play a role in reducing income inequality in the ethnoracial space, our results show that the impact is very limited. Using several indicators, I showed that taxes and transfers reduce ethnoracial inequality very little or not at all. This is due in large part to the small magnitude of government spending on targeted cash transfer programs. However, the difference in, for instance, the number of poor people by ethnicity and race is so large that it is unlikely it can be significantly reduced through income transfers alone. Nonetheless, at the minimum, the design of transfer programs should avoid exacerbating ethnoracial inequalities. As discussed above, this is the case with the Special Circumstances Pensions in Brazil, which disproportionately benefit the white population because Afro-Brazilians are less likely to be enrolled.

Notes

1. Furthermore, these groups have lower education levels, lower earnings and access to services, and are more likely to work in low-productivity jobs in the informal sector.
2. The framework has been applied to close to thirty low- and middle-income countries. Results can be found in Afkar, Jellema, and Wai-Poi (2017), Alam, Inchauste, and Serajuddin (2017), Aristy-Escuder et al. (2017), Arunatilake, Inchauste, and Lustig (2017), Beneke, Lustig, and Oliva (2017), Bucheli et al. (2014), Cabrera, Lustig, and Moran (2015), Cancho and Bondarenko (2017), Enami (2017), Higgins and Pereira (2014), Higgins and Lustig (2016), Higgins et al. (2016), Hill et al. (2017), ICEFI (2017a, 2017b), Inchauste et al. (2015), Jellema et al. (2017), Lopez-Calva et al. (2017), Lustig (2015, 2016, 2017a, 2017b), Lustig and Pessino (2014), Lustig, Pessino, and Scott (2014), Llerena et al. (2015), Martinez-Aguilar et al. (2017), Melendez and Martinez (2015), Molina (2016), Paz Arauco et al. (2014), Rossignolo (2016), Sauma and Trejos (2014), Scott (2014), Shimeles et al. (2016), Younger and Khachatryan (2017), Younger, Myamba, and Mdadila (2016), and Younger, Osei-Assibey, and Oppong (2017).
Also see the CEQ Working Paper series available at: www.commitmentoequity.org.
3. In this article, fiscal redistribution, fiscal policy, fiscal interventions, and taxes and transfers policy are used interchangeably.
4. In-kind transfers in the form of free or quasi-free education and health services are not analyzed here.
5. Although self-identification can be a subjective term and discrimination is more likely to be based on how others view an individual, data in Latin America is typically collected using the self-identification approach. In Bolivia and Guatemala, surveys allow individuals to identify with a specific indigenous people. For the purpose of this analysis, we have followed the convention of aggregating all of these groups into one so as to ensure that sample sizes are ample enough to perform the analysis. In cases where an individual is not asked to self-identify in the survey questionnaire, the identity of the head of the household is imputed if the individual is a direct relative. Otherwise the person is considered "other" and as such is not included in this analysis.
6. In Brazil, although data is available for whites, Asians, blacks (pretos), pardos (literally, brown), and indigenous, for the purpose of this analysis "nonwhite population" refers to Afro-Brazilians, which is the combination of pretos and pardos, with pardos representing the majority of the group (43 percent of the total population). Disaggregated data for the different ethnoracial groups is available on request.
7. While the ethnoracial divide exists well beyond the income/consumption space, for the purposes of a fiscal incidence analysis we focus on the latter.
8. The head count ratio is the total number of people living below the poverty line as a proportion of the total population.
9. These studies were produced under the Commitment to Equity and Inter-American Development Bank (CEQ-IADB) project "Incidence of Taxes and Social Spending by Ethnicity and Race," led by Nora Lustig.
10. This section draws from Lustig and Higgins (2013) and Lustig (2017a).
11. For example, Martinez-Vazquez (2008, 123) finds that "the results obtained with more realistic and laborious assumptions on elasticities tend to yield quite similar results."
12. Results presented here are for the scenario in which market income includes contributory pensions.
13. For more details on Bolivia, see Paz Arauco et al. (2014).
14. All the amounts that are in purchasing power parity (PPP) refer to the PPP conversion factors of 2005.
15. For more details on Brazil, see Higgins and Pereira (2014).
16. For more details on Guatemala, see Cabrera, Lustig, and Moran (2015).
17. Once each individual's circumstances set has been identified, the mean income of each circumstances set (i.e., the mean income of all individuals in that circumstances set) is calculated for the "prefiscal" and the "postfiscal" income. Let \(s_{i}^{j}\) indicate the income of each individual i, which in the smoothed distribution equals the mean income for income concept j (where the latter can be before taxes and transfers or after taxes and/or transfers; see Figure 1) of everyone in individual i's circumstances set.
Each individual is attributed the mean income of their circumstances set, and this income distribution is called the smoothed income distribution. Inequality is then measured over the smoothed income distribution for each income concept associated with taxes and transfers. Here the mean log deviation was used, which gives the measure of inequality of opportunity (in levels) by income concept. The mean log deviation of the smoothed distribution (for income concept j) is calculated as \(\frac{1}{n}\sum_{i}\ln ( \frac{\mu^{j}}{s_{i}^{j}} )\), where \(\mu^{j}\) is the mean income of the population for income concept j (either the original or the smoothed distribution can be used to calculate \(\mu^{j}\), since they have the same mean by definition), and \(s_{i}^{j}\) is defined above. (A short computational sketch of this measure follows the notes.)
18. In addition to the present list, one may want to consider other outcome variables that are not necessarily present in the income or public services space. For example, one may want to assess access to urban infrastructure (water and sanitation, street lighting, and so on) or the extent to which the fiscal system exacerbates or reduces occupational segregation or discrimination. These are not included in this article.
19. The national extreme poverty lines used in the analysis are the following. For Bolivia, Paz Arauco et al. (2013) used the national official poverty lines for urban areas, equal to $3.05 per day in 2005 purchasing power parity dollars, and for rural areas, equal to $2.31 per day in 2005 purchasing power parity dollars. Brazil does not have an official poverty line; thus, Higgins and Pereira (2013) used the extreme poverty lines reported by IPEA, which range from $1.18 to $2.18 per day in 2005 purchasing power parity dollars (Instituto de Pesquisa Econômica Aplicada, www.ipeadata.gov.br/doc/LinhasPobrezaRegionais.xls). Guatemala does not have an official poverty line either. For the purpose of this analysis, Cabrera, Lustig, and Moran (2015) defined the extreme poverty line as the amount needed to purchase a basic basket of food, or $2.03 per day in 2005 purchasing power parity dollars. To put these lines in perspective, the World Bank defines extreme poverty in Latin America with a poverty line that equals $2.50 per day in 2005 purchasing power parity dollars.
20. Furthermore, it is quite possible that these differences are not statistically significant. Unfortunately, measures of statistical significance are not available.
21. Recall that for all the poverty measures here I use national extreme poverty lines.
22. For a description of these programs, see, for Bolivia: Arauco et al. (2014); for Brazil: Higgins and Pereira (2014); and for Guatemala: Cabrera, Lustig and Moran (2015).
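As a worked illustration of note 17, the sketch below builds the smoothed distribution, assigning every individual the mean income of their circumstances set, and then computes its mean log deviation. The circumstance variables and incomes are simulated placeholders, not survey data.

```python
import numpy as np

def inequality_of_opportunity(income, circumstances):
    """Mean log deviation of the smoothed distribution, (1/n) * sum_i ln(mu / s_i),
    where s_i is the mean income of individual i's circumstances set."""
    income = np.asarray(income, dtype=float)
    # Group individuals by their full circumstances vector (e.g., gender, location, race).
    keys = [tuple(row) for row in np.atleast_2d(circumstances)]
    groups = {}
    for k, y in zip(keys, income):
        groups.setdefault(k, []).append(y)
    group_mean = {k: np.mean(v) for k, v in groups.items()}
    smoothed = np.array([group_mean[k] for k in keys])   # smoothed income distribution
    mu = income.mean()                                   # same mean as the original distribution
    return np.mean(np.log(mu / smoothed))

# Hypothetical example: circumstances = (gender, rural, nonwhite), prefiscal income.
rng = np.random.default_rng(2)
circ = rng.integers(0, 2, size=(1000, 3))
base = 5.0 + 0.3 * circ[:, 0] - 0.4 * circ[:, 1] - 0.5 * circ[:, 2]
income = np.exp(base + rng.normal(0, 0.6, size=1000))
print(round(inequality_of_opportunity(income, circ), 3))
```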
This article is part of "Incidence of Taxes and Social Spending by Ethnicity and Race," a joint project of the Commitment to Equity (CEQ) Institute and the Gender and Diversity Division of the Inter-American Development Bank. Led by Nora Lustig since 2008, the CEQ is a joint initiative of the Center for Inter-American Policy and Research and the Department of Economics at Tulane University and the Inter-American Dialogue (www.commitmentoequity.org). For their very useful comments and feedback on earlier drafts, I am very grateful to Suzanne Duryea, Ariel Fiszbein, Miguel Jaramillo, Andrew Morrison, Judith Morrison, Marcos Robles, and participants of the November 21, 2013, and May 12, 2014, seminars at the Inter-American Development Bank, as well as to José R. Jouve-Martín and Philip Oxhorn, editors of the special issue of the Latin American Research Review, and to two anonymous reviewers. I am also very grateful to Adam Ratzlaff for his excellent research assistantship as well as to Ana Lucia Iturriza and Eliana Rubiano for their valuable support in the coordination of the project. Afkar, Rythia, Jon Jellema, and Matthew Wai-Poi. 2017. "The Distributional Impact of Fiscal Policy in Indonesia." In The Distributional Impact of Fiscal Policy: Experience from Developing Countries, edited by Gabriela Inchauste and Nora Lustig. Washington, DC: World Bank. Alam, Shamma A., Gabriela Inchauste, and Umar Serajuddin. 2017. "The Distributional Impact of Fiscal Policy in Jordan." In The Distributional Impact of Fiscal Policy: Experience from Developing Countries, edited by Gabriela Inchauste and Nora Lustig. Washington, DC: World Bank. Alesina, Alberto, Igier Stelios Michalopoulos, and Elias Papaioannou. 2012. "Ethnic Inequality." Unpublished manuscript, March. Aristy-Escuder, Jaime, Maynor Cabrera, Blanca Moreno-Dodson, and Miguel Sanchez-Martin. 2017. "Fiscal Policy and Redistribution in the Dominican Republic." In Commitment to Equity Handbook. Estimating the Impact of Fiscal Policy on Inequality and Poverty, edited by Nora Lustig. Washington, DC: Brookings Institution and CEQ Institute, Tulane University. http://www.commitmentoequity.org/publications/handbook.php. Arunatilake, Nisha, Gabriela Inchauste, and Nora Lustig. 2017. "The Incidence of Taxes and Spending in Sri Lanka." In The Distributional Impact of Fiscal Policy: Experience from Developing Countries, edited by Gabriela Inchauste and Nora Lustig. Washington, DC: World Bank. Barros, Ricardo, Francisco H. G. Ferreira, José Molinas Vega, and Jaime Saavedra. 2009. Measuring Inequality of Opportunities in Latin America and the Caribbean. Washington, DC: The World Bank. Beneke, Margarita, Nora Lustig, and Jose Andres Oliva. 2017. "The Impact of Taxes and Social Spending on Inequality and Poverty in El Salvador." In Commitment to Equity Handbook: Estimating the Impact of Fiscal Policy on Inequality and Poverty, edited by Nora Lustig. Washington, DC: Brookings Institution Press and CEQ Institute, Tulane University. http://www.commitmentoequity.org/publications/handbook.php. Bucheli, Marisa, Nora Lustig, Maximo Rossi, and Florencia Amabile. 2014. "Social Spending, Taxes and Income Redistribution in Uruguay." In "Analyzing the Redistributive Impact of Taxes and Transfers in Latin America," edited by Nora Lustig, Carola Pessino, and John Scott, special issue. Public Finance Review 42(3): 413–433. DOI: https://doi.org/10.1177/1091142113493493 Cabrera, Maynor, Nora Lustig, and Hilcías E. Moran. 2015. 
"Fiscal Policy, Inequality and the Ethnic Divide in Guatemala." World Development 76: 263–279. DOI: https://doi.org/10.1016/j.worlddev.2015.07.008 Cancho, Cesar, and Elena Bondarenko. 2017. "The Distributional Impact of Fiscal Policy in Georgia." In The Distributional Impact of Fiscal Policy: Experience from Developing Countries, edited by Gabriela Inchauste and Nora Lustig. Washington, DC: World Bank. De Ferranti, David, Guillermo E. Perry, Francisco Ferreira, and Michael Walton. 2004. Inequality in Latin America: Breaking with History? Washington, DC: World Bank. DOI: https://doi.org/10.1596/0-8213-5665-8 Easterly, William, and Ross Levine. 1997. "Africa's Growth Tragedy: Policies and Ethnic Divisions." Quarterly Journal of Economics 112(4): 1203–1250. DOI: https://doi.org/10.1162/003355300555466 Enami, Ali. 2017. "Measuring the Effectiveness of Taxes and Transfers in Fighting Poverty and Reducing Inequality in Iran." In Commitment to Equity Handbook: Estimating the Impact of Fiscal Policy on Inequality and Poverty, edited by Nora Lustig. Washington, DC: Brookings Institution and CEQ Institute, Tulane University. http://www.commitmentoequity.org/publications/handbook.php. Hall, Gillette, and Harry Anthony Patrinos, eds. 2006. Indigenous Peoples, Poverty and Human Development in Latin America, 1994–2004. New York: Palgrave Macmillan. Higgins, Sean, and Claudiney Pereira. 2013. "Fiscal Incidence by Race and Ethnicity: Master Work-Book for Brazil." Prepared by the Commitment to Equity Project for the Inter-American Development Bank, Programa para Mejorar las Estadísticas de Raza y Etnicidad para el Análisis y Formulación de Políticas. Higgins, Sean, and Claudiney Pereira. 2014. "The Effects of Brazil's Taxation and Social Spending on the Distribution of Household Income." In "Analyzing the Redistributive Impact of Taxes and Transfers in Latin America," edited by Nora Lustig, Carola Pessino and John Scott, special issue. Public Finance Review 42(3): 346–367. DOI: https://doi.org/10.1177/1091142113501714 Hill, Ruth, Gabriela Inchauste, Nora Lustig, Eyasu Tsehaye, and Tassew Woldehanna. 2017. "A Fiscal Incidence Analysis for Ethiopia." In The Distributional Impact of Fiscal Policy: Experience from Developing Countries, edited by Gabriela Inchauste and Nora Lustig. Washington, DC: World Bank. ICEFI (Instituto Centroamericano de Estudios Fiscales). 2017a. "Incidencia de la politica fiscal en el ambito rural de Centro America: El caso de Honduras." CEQ Working Paper No. 51, CEQ Institute, Tulane University, IFAD, and ICEFI. ICEFI (Instituto Centroamericano de Estudios Fiscales). 2017b. "Incidencia de la politica fiscal en la desigualdad y la pobreza en Nicaragua." CEQ Working Paper No. 52, CEQ Institute, Tulane University, IFAD, and ICEFI. Inchauste, Gabriela, Nora Lustig, Mashekwa Maboshe, Catriona Purfield, Ingrid Woolard, and Precious Zikhali. 2015. "The Distributional Impact of Fiscal Policy in South Africa." Policy Research Working Paper No. 174. Washington, DC: World Bank. DOI: https://doi.org/10.1596/1813-9450-7194 Jellema, Jon, Astrid Haas, Nora Lustig, and Sebastian Wolf. 2017. "The Impact of Taxes, Transfers, and Subsidies on Inequality and Poverty in Uganda." In Commitment to Equity Handbook: Estimating the Impact of Fiscal Policy on Inequality and Poverty, edited by Nora Lustig. Washington, DC: Brookings Institution and CEQ Institute, Tulane University. http://www.commitmentoequity.org/publications/handbook.php. 
Lopez-Calva, Luis Felipe, Nora Lustig, Mikhail Matytsin, and Daria Popova. 2017. "Who Benefits from Fiscal Redistribution in Russia?" In The Distributional Impact of Fiscal Policy: Experience from Developing Countries, edited by Gabriela Inchauste and Nora Lustig. Washington, DC: World Bank. Lustig, Nora. 2015. "The Redistributive Impact of Government Spending on Education and Health: Evidence from 13 Developing Countries in the Commitment to Equity Project." In Inequality and Fiscal Policy, edited by Sanjeev Gupta, Michael Keen, Benedict Clements, and Ruud de Mooij, chapter 16. Washington, DC: International Monetary Fund. Lustig, Nora. 2016. "Inequality and Fiscal Redistribution in Middle Income Countries: Brazil, Chile, Colombia, Indonesia, Mexico, Peru and South Africa." Journal of Globalization and Development 7(1): 17–60. DOI: https://doi.org/10.1515/jgd-2016-0015 Lustig, Nora, ed. 2017a. Commitment to Equity Handbook: Estimating the Impact of Fiscal Policy on Inequality and Poverty. Washington, DC: Brookings Institution Press and CEQ Institute, Tulane University. http://www.commitmentoequity.org/publications/handbook.php. Lustig, Nora. 2017b. "El impacto del sistema tributario y el gasto social en la distribución del ingreso y la pobreza en América Latina." El Trimestre Economico, no. 335 (July-September): 493–568. Lustig, Nora, and Carola Pessino. 2014. "Social Spending and Income Redistribution in Argentina in the 2000s: The Rising Role of Noncontributory Pensions." Public Finance Review 42(3): 304–325. Lustig, Nora, and Sean Higgins. 2013. "Commitment to Equity Assessment (CEQ): Estimating the Incidence of Social Spending, Subsidies and Taxes, Handbook." CEQ Working Paper No. 1, July 2011; revised January 2013. Center for Inter-American Policy and Research and Department of Economics, Tulane University and Inter-American Dialogue. Martinez-Aguilar, Sandra, Alan Fuchs, Eduardo Ortiz-Juarez, and Giselle del Carmen. 2017. "The Impact of Fiscal Policy on Inequality and Poverty in Chile." In Commitment to Equity Handbook: Estimating the Impact of Fiscal Policy on Inequality and Poverty, edited by Nora Lustig. Washington, DC: Brookings Institution and CEQ Institute, Tulane University. http://www.commitmentoequity.org/publications/handbook.php. Martinez-Vazquez, Jorge. 2008. "The Impact of Budgets on the Poor: Tax and Expenditure Benefit Incidence Analysis." In Public Finance for Poverty Reduction: Concepts and Case Studies from Africa and Latin America, edited by Blanca Moreno-Dodson and Quentin Wodon, 113–162. Directions in Development Series. Washington, DC: World Bank. Melendez, Marcela, and Valentina Martinez. 2015. "CEQ Master Workbook: Colombia. Version: December 17, 2015," prepared by the CEQ Data Center on Fiscal Redistribution, CEQ Institute, Tulane University, and Inter-American Development Bank. Molina, Emiro. 2016. "CEQ Master Workbook: Venezuela. Version: November 15, 2016," prepared by the CEQ Data Center on Fiscal Redistribution, CEQ Institute, Tulane University. Nickel, James W. 2002. "Discrimination and Morally Relevant Characteristics." In The Affirmative Action Debate, edited by Steven M. Cahn, 3–4. New York: Routledge. Ñopo, Hugo. 2012. New Century, Old Disparities: Gender and Ethnic Earnings Gaps in Latin America and the Caribbean. Washington, DC: World Bank; Inter-American Development Bank. Paz Arauco, Verónica, George Gray Molina, Wilson Jiménez Pozo, and Ernesto Yáñez Aguilar. 2013. 
"Fiscal Incidence by Race and Ethnicity: Master Workbook for Bolivia," prepared by the Commitment to Equity Project for the Inter-American Development Bank, Programa para Mejorar las Estadísticas de Raza y Etnicidad para el Análisis y Formulación de Políticas. Paz Arauco, Verónica, George Gray Molina, Wilson Jiménez Pozo, and Ernesto Yáñez Aguilar. 2014. "Explaining Low Redistributive Impact in Bolivia." In "Analyzing the Redistributive Impact of Taxes and Transfers in Latin America," edited by Nora Lustig, Carola Pessino, and John Scott, special issue, Public Finance Review 42(3): 326–345. DOI: https://doi.org/10.1177/1091142113496133 Pereira, Claudiney. 2017. "Ethno-Racial Poverty and Income Inequality in Brazil." In Commitment to Equity Handbook: Estimating the Impact of Fiscal Policy on Inequality and Poverty, edited by Nora Lustig. Washington, DC: Brookings Institution and CEQ Institute, Tulane University. http://www.commitmentoequity.org/publications/handbook.php. Roemer, John E. 1998. Equality of Opportunity. Cambridge, MA: Harvard University Press. Rossignolo, Dario. 2016. "Taxes, Expenditures, Poverty and Income Distribution in Argentina." CEQ Working Paper No. 45, CEQ Institute, Tulane University. Also in Commitment to Equity Handbook: Estimating the Impact of Fiscal Policy on Inequality and Poverty, edited by Nora Lustig. Washington, DC: Brookings Institution Press and CEQ Institute, Tulane University, 2017. http://www.commitmentoequity.org/publications/handbook.php. Sauma, Pablo, and Juan Diego Trejos. 2014. "Gasto público social, impuestos, redistribución del ingreso y pobreza en Costa Rica." CEQ Working Paper No. 18, Center for Inter-American Policy and Research and Department of Economics, Tulane University, and Inter-American Dialogue. Scott, John. 2014. "Redistributive Impact and Efficiency of Mexico's Fiscal System." In "Analyzing the Redistributive Impact of Taxes and Transfers in Latin America," edited by Nora Lustig, Carola Pessino, and John Scott, special issue. Public Finance Review 42(3): 368–390. DOI: https://doi.org/10.1177/1091142113497394 Shimeles, Abebe, Ahmed Moummi, Nizar Jouini, and Nora Lustig. 2016. "Fiscal Incidence and Poverty Reduction: Evidence from Tunisia." CEQ Working Paper No. 38, CEQ Institute, Tulane University. Younger, Stephen D., and A. Khachatryan. 2017. "Fiscal Incidence in Armenia." In The Distributional Impact of Fiscal Policy: Experience from Developing Countries, edited by Gabriela Inchauste and Nora Lustig. Washington, DC: World Bank. Younger, Stephen D., Eric Osei-Assibey, and Felix Oppong. 2017. "Fiscal Incidence in Ghana." Review of Development Economics. Published electronically January 11, 2017. DOI: https://doi.org/10.1111/rode.12299 Younger, Stephen D., Flora Myamba, and Kenneth Mdadila. 2016. "Fiscal Incidence in Tanzania." African Development Review 28(3): 264–276. DOI: https://doi.org/10.1111/1467-8268.12204 Lustig, N., 2017. Fiscal Redistribution and Ethnoracial Inequality in Bolivia, Brazil, and Guatemala. Latin American Research Review, 52(2), pp.208–220. DOI: http://doi.org/10.25222/larr.90 Lustig N. Fiscal Redistribution and Ethnoracial Inequality in Bolivia, Brazil, and Guatemala. Latin American Research Review. 2017;52(2):208–20. DOI: http://doi.org/10.25222/larr.90 Lustig, N. (2017). Fiscal Redistribution and Ethnoracial Inequality in Bolivia, Brazil, and Guatemala. Latin American Research Review, 52(2), 208–220. 
DOI: http://doi.org/10.25222/larr.90 Lustig N, 'Fiscal Redistribution and Ethnoracial Inequality in Bolivia, Brazil, and Guatemala' (2017) 52 Latin American Research Review 208 DOI: http://doi.org/10.25222/larr.90 Lustig, Nora. 2017. "Fiscal Redistribution and Ethnoracial Inequality in Bolivia, Brazil, and Guatemala". Latin American Research Review 52 (2): 208–20. DOI: http://doi.org/10.25222/larr.90 Lustig, Nora. "Fiscal Redistribution and Ethnoracial Inequality in Bolivia, Brazil, and Guatemala". Latin American Research Review 52, no. 2 (2017): 208–20. DOI: http://doi.org/10.25222/larr.90 Lustig, N.. "Fiscal Redistribution and Ethnoracial Inequality in Bolivia, Brazil, and Guatemala". Latin American Research Review, vol. 52, no. 2, 2017, pp. 208–20. DOI: http://doi.org/10.25222/larr.90
Quantaggle

Variational Quantum Eigensolver (VQE)

VQE is an algorithm for finding an (approximate) ground state of a given Hamiltonian $H$ on NISQ devices. It is based on the variational method of quantum mechanics and splits the task of finding the ground state between classical and quantum computers. The algorithm works as follows:

1. Prepare an initial quantum state $|0\rangle$ on the quantum computer.
2. Generate an ansatz quantum state $|\psi(\vec{\theta})\rangle$ by applying a quantum circuit $U(\vec{\theta})$ parametrized by (classical) parameters $\vec{\theta}$: $|\psi(\vec{\theta})\rangle = U(\vec{\theta}) |0\rangle$.
3. Measure the expectation value of the Hamiltonian $H$ on the quantum computer, denoted $E(\vec{\theta}) = \langle \psi(\vec{\theta})| H|\psi(\vec{\theta})\rangle$.
4. Update the parameters $\vec{\theta}$ to make $E(\vec{\theta})$ smaller using a classical optimization algorithm.
5. Repeat steps 1–4 until convergence.

If a minimum of $E(\vec{\theta})$ is reached at $\vec{\theta}'$, then $|\psi(\vec{\theta}')\rangle$ is close to the ground state of $H$ and $E(\vec{\theta}')$ is close to the ground-state energy (see the variational method of quantum mechanics at Wikipedia). The choice of $U(\vec{\theta})$ determines the accuracy of the approximate ground state obtained by VQE. A minimal numerical sketch of this optimization loop is given after the method descriptions below.

"A variational eigenvalue solver on a photonic quantum processor", A. Peruzzo et al., Nat. Comm., 5, 4213 (2014).

Subspace-Search VQE (SS-VQE)

SS-VQE is an extension of VQE for finding excited states of a given Hamiltonian $H$.

1. Prepare a set of $k$ mutually orthogonal quantum states $|\varphi_0 \rangle, \ldots, |\varphi_{k-1}\rangle$.
2. For each $i$, generate $|\psi_{i}(\vec{\theta})\rangle = U(\vec{\theta})|\varphi_i\rangle$ and measure $\langle\psi_{i}(\vec{\theta})|H|\psi_{i}(\vec{\theta})\rangle$ on the quantum computer.
3. Sum up the results of step 2 with weights $w_0 > \ldots > w_{k-1} > 0$ and compute the cost function $L(\vec{\theta})=\sum_{i}w_{i}\langle\psi_{i}(\vec{\theta})|H|\psi_{i}(\vec{\theta})\rangle$ on a classical computer.
4. Update the parameters $\vec{\theta}$ to make the cost function $L(\vec{\theta})$ smaller.

It can be shown that when a minimum of the cost function $L(\vec{\theta})$ is reached at $\vec{\theta}'$, approximate eigenstates and eigenvalues of $H$ are obtained as $|\psi_0(\vec{\theta}')\rangle, \ldots, |\psi_{k-1}(\vec{\theta}')\rangle$ and $\langle\psi_0(\vec{\theta}')|H|\psi_0(\vec{\theta}')\rangle, \ldots, \langle\psi_{k-1}(\vec{\theta}')|H|\psi_{k-1}(\vec{\theta}')\rangle$, respectively.

"Subspace-search variational quantum eigensolver for excited states", K. M. Nakanishi, K. Mitarai, and K. Fujii, https://arxiv.org/abs/1810.09434

Quantum Subspace Expansion (QSE)

QSE is an algorithm for finding excited states of a given Hamiltonian $H$. It resembles the configuration interaction method in quantum chemistry.

1. Determine a set of excitation operators $E_1,\ldots, E_M$ and a reference (approximate) ground state $|\psi_{GS}\rangle$. Single excitations of electrons, $c_j^\dagger c_l$ ($j,l=0,1,\ldots$), are one of the common choices of excitation operators. For notational convenience, we add $E_0= I$ (the identity operator) to the set of excitation operators; note that $E_0|\psi_{GS}\rangle = |\psi_{GS}\rangle$.
2. Prepare the approximate ground state $|\psi_{GS}\rangle$, obtained by the VQE algorithm (or other methods), on the quantum computer.
3. Measure the quantities $h_{ij} = \langle \psi_{GS}| E_i^\dagger H E_j |\psi_{GS}\rangle$ and $S_{ij} = \langle \psi_{GS}| E_i^\dagger E_j |\psi_{GS}\rangle$ on the quantum computer ($i,j=0,\ldots,M$).
4. Diagonalize the Hamiltonian within the subspace spanned by $E_0|\psi_{GS}\rangle, \ldots, E_M|\psi_{GS}\rangle$. Namely, solve the generalized eigenvalue problem $hC=SCE'$ within the subspace, where $C$ contains the coefficients of the (approximate) eigenvectors and $E'$ is a diagonal matrix whose diagonal elements are the (approximate) eigenvalues of $H$.

We note that QSE also plays a role in mitigating the noise errors that are inevitable in NISQ devices (see the reference below).

"Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states", J. R. McClean et al., Phys. Rev. A 95, 042308 (2017).

Multistate, Contracted VQE (MC-VQE)

MC-VQE is an extension of VQE to calculate excited states of a given Hamiltonian $H$. It is similar to a simple version of the SS-VQE algorithm.

1. Calculate configuration interaction singles (CIS) states $|CIS_i\rangle$ on a classical computer and construct quantum circuits that prepare them on the quantum computer, $|CIS_i\rangle = U_i|0\rangle$ ($i=0,1,\ldots$).
2. For each $i$, generate $|\psi_{i}(\vec{\theta})\rangle = U(\vec{\theta})|CIS_i\rangle=U(\vec{\theta})U_i|0\rangle$ and measure $\langle\psi_{i}(\vec{\theta})|H|\psi_{i}(\vec{\theta})\rangle$ on the quantum computer.
3. Sum up the results of step 2 and compute the cost function $L(\vec{\theta})=\sum_i \langle\psi_i(\vec{\theta})|H|\psi_i(\vec{\theta})\rangle$ on a classical computer.
4. After convergence of $L(\vec{\theta})$ at $\vec{\theta}'$, diagonalize the Hamiltonian $H$ within the subspace spanned by $\{|\psi_i(\vec{\theta}')\rangle : i=0,1,\ldots\}$ in the same way as in the QSE algorithm above.

"Quantum Computation of Electronic Transitions Using a Variational Quantum Eigensolver", R. M. Parrish et al., Phys. Rev. Lett. 122, 230401 (2019).

VQE for excited states by including overlaps in the cost function

The two papers below propose an algorithm that obtains excited states of a given Hamiltonian $H$ sequentially by adding the overlap amplitudes between the ansatz state $|\psi(\vec{\theta})\rangle$ and previously found eigenstates to the VQE cost function.

"Variational Quantum Computation of Excited States", O. Higgott, D. Wang, and S. Brierley, Quantum 3, 156 (2019).
"Variational quantum algorithms for discovering Hamiltonian spectra", T. Jones et al., Phys. Rev. A 99, 062304 (2019).
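As promised above, here is a minimal numerical sketch of the VQE loop. It minimizes $E(\vec{\theta}) = \langle\psi(\vec{\theta})|H|\psi(\vec{\theta})\rangle$ for a small two-qubit Hamiltonian using exact statevector linear algebra in place of measurements on a quantum device; the Hamiltonian, the RY/CNOT ansatz, and the COBYLA optimizer are arbitrary illustrative choices, not part of any particular library or of the papers cited above.

```python
import numpy as np
from scipy.optimize import minimize

# Pauli matrices and a Kronecker-product helper.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Example Hamiltonian: H = Z0 Z1 + 0.5 (X0 + X1); exact ground energy is -sqrt(2).
H = kron(Z, Z) + 0.5 * (kron(X, I2) + kron(I2, X))

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def ansatz_state(theta):
    """U(theta)|00>: an RY rotation on each qubit, a CNOT, then a second RY layer."""
    psi0 = np.zeros(4, dtype=complex)
    psi0[0] = 1.0
    layer1 = kron(ry(theta[0]), ry(theta[1]))
    layer2 = kron(ry(theta[2]), ry(theta[3]))
    return layer2 @ CNOT @ layer1 @ psi0

def energy(theta):
    psi = ansatz_state(theta)
    return float(np.real(psi.conj() @ H @ psi))

# Classical optimizer updates theta until E(theta) converges (steps 2-5 above).
result = minimize(energy, x0=np.zeros(4), method="COBYLA")
exact = np.linalg.eigvalsh(H)[0]
print(f"VQE energy: {result.fun:.4f}, exact ground energy: {exact:.4f}")
```

Because the energy here is computed exactly, the optimization converges to a value close to the true ground-state energy; on hardware, each evaluation of `energy` would instead be estimated from repeated measurements.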
Numerical solution of nonlinear stochastic Itô–Volterra integral equations based on Haar wavelets. Jieheng Wu1, Guo Jiang1 (ORCID: orcid.org/0000-0001-9484-4017) & Xiaoyan Sang1. Advances in Difference Equations, volume 2019, Article number: 503 (2019). In this paper, an efficient numerical method is presented for solving nonlinear stochastic Itô–Volterra integral equations based on Haar wavelets. By the properties of Haar wavelets and stochastic integration operational matrices, the approximate solution of nonlinear stochastic Itô–Volterra integral equations can be found. At the same time, the error analysis is established. Finally, two numerical examples are offered to verify the validity and precision of the presented method. Stochastic integral equations are widely applied in engineering, biology, oceanography, physical sciences, etc. These systems depend on a noise source, such as Gaussian white noise. As is well known, many stochastic Volterra integral equations do not have exact solutions, so it makes sense to find more precise approximate solutions to stochastic Volterra integral equations. There are different numerical methods for stochastic Volterra integral equations, for example, orthogonal basis methods [1,2,3,4,5,6,7,8,9,10], Walsh series methods [11, 12], and polynomial methods [13,14,15,16]. In [1], Mohammadi studied linear stochastic Itô–Volterra integral equations (SIVIEs) through Haar wavelets (HWs). In [3], Maleknejad et al. also considered the same integral equations by applying block pulse functions (BPFs). In [9], Heydari et al. solved linear SIVIEs by the generalized hat basis functions. Meanwhile, based on the same hat functions, Hashemi et al. also presented a numerical method for nonlinear SIVIEs driven by fractional Brownian motion [8]. Moreover, Jiang et al. applied BPFs to solve two-dimensional nonlinear SIVIEs [7]. More generally, Zhang studied the existence and uniqueness of solutions to stochastic Volterra integral equations with singular kernels and constructed an Euler-type approximate solution [17, 18]. Inspired by the discussion above, we use HWs to solve the following nonlinear SIVIE: $$ x(v)=x_{0}(v)+ \int_{0}^{v}{k(u,v)}\sigma \bigl(x(u) \bigr) \,du+ \int_{0}^{v}r(u,v)\rho \bigl(x(u) \bigr)\,dB(u), \quad v\in[0,1), $$ where \(x(v)\) is an unknown stochastic process defined on some probability space \((\varOmega,\mathcal{F},P)\), \(k(u,v)\) and \(r(u,v)\) are kernel functions for \(u, v\in[0,1)\), and \(x_{0}(v)\) is an initial value function. \(B(u)\) is a Brownian motion and \(\int_{0}^{v}r(u,v)\rho(x(u))\,dB(u)\) is an Itô integral. σ and ρ are analytic functions that satisfy certain boundedness and Lipschitz conditions. In contrast to the papers above [1, 3, 7,8,9], the contributions of this paper are as follows. First, we establish a preparatory theorem to handle the nonlinear analytic functions. Second, the error analysis is proved rigorously. Finally, compared with [8], the numerical solution is more accurate and the computation is simpler because of the use of HWs. Moreover, the rationality and effectiveness of the method are further supported by two examples. The structure of the article is as follows. In Sect. 2, some preliminaries of BPFs and HWs are given. In Sect. 3, the relationship between HWs and BPFs is shown. In Sect. 4, the approximate solutions of (1) are derived. In Sect. 5, the error analysis of the numerical method is demonstrated. In Sect. 6, the validity and efficiency of the numerical method are verified by two examples.
BPFs and HWs have been widely analysed by many scholars. For details, see references [1, 3].

Block pulse functions

BPFs are defined as $$\psi_{i}(v)= \textstyle\begin{cases} 1 & ih\leq v< (i+1)h,\\ 0 & \text{otherwise}, \end{cases} $$ for \(i=0,\ldots,m-1\), where \(m=2^{L}\) for a positive integer L, \(h=\frac{1}{m}\), and \(v\in[0,1)\). The basic properties of BPFs are as follows: disjointness: $$ \psi_{i}(v)\psi_{j}(v)=\delta_{ij} \psi_{i}(v), $$ where \(v\in[0,1)\), \(i, j=0,1,\ldots,m-1\), and \(\delta_{ij}\) is the Kronecker delta; orthogonality: $$\int_{0}^{1}\psi_{i}(v) \psi_{j}(v)\,dv=h\delta_{ij}; $$ completeness property: for every \(g\in L^{2}[0,1)\), Parseval's identity holds: $$ \int_{0}^{1}g^{2}(v)\,dv=\lim_{m\to\infty}\sum_{i=0}^{m}(g_{i})^{2} \bigl\Vert \psi_{i}(v) \bigr\Vert ^{2}, $$ where $$g_{i}=\frac{1}{h} \int_{0}^{1}g(v)\psi_{i}(v)\,dv. $$ The set of BPFs can be represented by the following m-dimensional vector: $$ \varPsi_{m}(v)= \bigl( \psi_{0}(v),\ldots, \psi_{m-1}(v) \bigr)^{T},\quad v\in[0,1). $$ From the above properties, it yields $$\begin{gathered} \varPsi_{m}(v)\varPsi_{m}^{T}(v)= \left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} \psi_{0}(v)&0&\cdots&0\\ 0&\psi_{1}(v)&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\psi_{m-1}(v) \end{array}\displaystyle \right )_{m\times m}, \\ \varPsi_{m}^{T}(v)\varPsi_{m}(v)=1, \\ \varPsi_{m}(v)\varPsi_{m}^{T}(v)F_{m}= \mathbf{D}_{F_{m}}\varPsi_{m}(v),\end{gathered} $$ where \(F_{m}= (f_{0},f_{1},\ldots,f_{m-1} )^{T}\) and \(\mathbf{D}_{F_{m}}=\operatorname{diag}(F_{m})\). Furthermore, for an \(m\times m\) matrix M, it yields $$\varPsi_{m}^{T}(v)\mathbf{M}\varPsi_{m}(v)= \hat{{M}}^{T}\varPsi_{m}(v), $$ where M̂ is an m-dimensional vector whose entries equal the main diagonal entries of M. In accordance with BPFs, every square integrable function \(x(v)\) on the interval \([0,1)\) can be approximated as follows: $$x(v)\simeq x_{m}(v)=\sum_{i=0}^{m-1}x_{i} \psi_{i}(v)=X_{m}^{T}\varPsi_{m}(v)= \varPsi_{m}^{T}(v)X_{m}, $$ where the function \(x_{m}(v)\) is an approximation of the function \(x(v)\) and $$ X_{m}= (x_{0},x_{1},\ldots,x_{m-1} )^{T}. $$ Similarly, every function \(k(u,v)\) defined on \([0,1)\times[0,1)\) can be written as $$k(u,v)=\varPsi_{m_{1}}^{T}(u)\mathbf{K}\varPhi_{m_{2}}(v), $$ where \(\mathbf{K}= (k_{ij} )_{m_{1}\times m_{2}}\) with $$ k_{ij}\simeq\frac{1}{h_{1}h_{2}} \int_{0}^{1} \int_{0}^{1}k(u,v)\psi_{i}(u) \phi_{j}(v)\,du\,dv, $$ and \(h_{1}=\frac{1}{m_{1}}\), \(h_{2}=\frac{1}{m_{2}}\).

Haar wavelets

The notation and definition of HWs are introduced in this section (also see [1]). The set of orthogonal HWs is defined as follows: $$ h_{i}(v)=2^{\frac{l}{2}}h \bigl(2^{l}v-z \bigr), \quad i=2^{l}+z, 0 \leq z < 2^{l}, l \geq0, i,l,z \in\mathbb{N}, $$ where \(h_{0}(v)=1\), \(v \in[0,1)\), and $$h(v)= \textstyle\begin{cases} 1 & 0\leq v< \frac{1}{2},\\ -1 & \frac{1}{2} \leq v< 1. \end{cases} $$ For the HWs \(h_{i}(v)\) defined on \([0,1)\), we have $$ \int_{0}^{1}h_{i}(v)h_{j}(v) \,dv= \delta_{ij}, $$ where \(\delta_{ij}\) is the Kronecker delta.
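Both families of basis functions defined above are easy to generate on a grid. The short sketch below (an illustrative check, not part of the method itself) builds \(\psi_i\) and \(h_i\) for \(m=8\) and verifies the disjointness and orthogonality relations numerically at the subinterval midpoints, where midpoint quadrature is exact for these piecewise-constant functions.

```python
import numpy as np

m = 8                              # m = 2^L with L = 3
h = 1.0 / m
v = (np.arange(m) + 0.5) * h       # midpoints of the m subintervals

def bpf(i, t):
    """Block pulse function psi_i on [0, 1)."""
    return np.where((t >= i * h) & (t < (i + 1) * h), 1.0, 0.0)

def haar(i, t):
    """Haar wavelet h_i on [0, 1): h_0 = 1, h_i = 2^{l/2} h(2^l t - z) for i = 2^l + z."""
    t = np.asarray(t, dtype=float)
    if i == 0:
        return np.ones_like(t)
    l = int(np.floor(np.log2(i)))
    z = i - 2 ** l
    u = (2 ** l) * t - z
    return (2 ** (l / 2)) * (np.where((u >= 0) & (u < 0.5), 1.0, 0.0)
                             - np.where((u >= 0.5) & (u < 1.0), 1.0, 0.0))

Psi = np.array([bpf(i, v) for i in range(m)])    # BPFs sampled at midpoints
H = np.array([haar(i, v) for i in range(m)])     # HWs sampled at midpoints

# int psi_i psi_j dv = h * delta_ij and int h_i h_j dv = delta_ij (midpoint quadrature).
assert np.allclose(Psi @ Psi.T * h, h * np.eye(m))
assert np.allclose(H @ H.T * h, np.eye(m))
print("disjointness and orthogonality checks passed for m =", m)
```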
In accordance with HWs, every square integrable function \(x(v)\) can be approximated as follows: $$ x(v)=c_{0}h_{0}(v)+\sum_{i=1}^{\infty} c_{i}h_{i}(v), \quad v\in[0,1), i=2^{l}+z, 0 \leq z < 2^{l}, l \geq0, l,z\in \mathbb{N}, $$ where $$ c_{i}= \int_{0}^{1}x(v)h_{i}(v)\,dv, \quad i=0 \quad\text{or}\quad i=2^{l}+z, 0 \leq z < 2^{l}, l \geq0, l,z \in\mathbb{N}. $$ We can see that when \(m=2^{L}\), equation (8) can be rewritten as $$ x(v)=c_{0}h_{0}(v)+\sum_{i=1}^{m-1} c_{i}h_{i}(v), \quad i=2^{l}+z, 0 \leq z < 2^{l}, l=0,1,\ldots,L-1. $$ Obviously, the vector form is as follows: $$ x(v)\simeq C_{m}^{T}H_{m}(v)=H_{m}^{T}(v)C_{m}, $$ where \(H_{m}= (h_{0}(v),h_{1}(v),\ldots,h_{m-1}(v) )^{T}\) and \(C_{m}= (c_{0},c_{1},\ldots,c_{m-1} )^{T}\) are the HW vector and the Haar coefficient vector, respectively. Similarly, every function \(k(u,v)\) defined on \([0,1)\times[0,1)\) can be approximated as follows: $$ k(u,v)=H_{m}^{T}(u)\mathbf{K}H_{m}(v), $$ where \(\mathbf{K}= (k_{ij} )_{m\times m}\) with $$ k_{ij}= \int_{0}^{1} \int_{0}^{1} k(u,v)h_{i}(u)h_{j}(v) \,du\,dv, \quad i,j=0,1,\ldots,m-1. $$

Haar wavelets and BPFs

Some lemmas about HWs and BPFs are introduced in this section. For a detailed description, see reference [1].

Lemma 3.1 Suppose that \(H_{m}(v)\) and \(\varPsi_{m}(v)\) are respectively given in (10) and (4). Then \(H_{m}(v)\) can be expressed in terms of BPFs as follows: $$ H_{m}(v)=\mathbf{Q}\varPsi_{m}(v),\quad m=2^{L}, $$ where \(\mathbf{Q}= (Q_{ij} )_{m \times m}\) and $$ Q_{ij}=h_{i-1} \biggl( \frac{2j-1}{2m} \biggr), \quad i,j=1,2,\ldots,m, i-1=2^{l}+z, 0 \leq z< 2^{l}. $$ See [1]. □

Suppose that Q is given in (11). Then we have $$ \mathbf{Q}^{T}\mathbf{Q}=m\mathbf{I}, $$ where I is an \(m \times m\) identity matrix.

Suppose that F is an m-dimensional vector. Then we have $$ H_{m}(v)H_{m}^{T}(v)F=\tilde{\mathbf{F}}H_{m}(v), $$ where \(\tilde{\mathbf{F}}\) is an \(m \times m\) matrix, \(\tilde{\mathbf{F}}=\mathbf{Q}\bar{\mathbf{F}}\mathbf{Q^{-1}}\), and \(\bar{\mathbf{F}}=\operatorname{diag}(\mathbf{Q}^{T}F)\).

Suppose that M is an \(m\times m\) matrix. Then we have $$ H_{m}^{T}(v)\mathbf{M} H_{m}(v)= \hat{M}H_{m}(v), $$ where \(\hat{M}=N^{T}\mathbf{Q}^{-1}\) is an m-dimensional vector and the entries of the vector N are the diagonal entries of the matrix \(\mathbf{Q}^{T}\mathbf{MQ}\).

Suppose that \(\varPsi_{m}(v)\) is given in (4). Then we have $$ \int_{0}^{v} \varPsi_{m}(u) \,du \simeq \mathbf{P} \varPsi_{m}(v), $$ where $$\mathbf{P}=\frac{h}{2} \left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} 1&2&2&\cdots&2\\ 0&1&2&\cdots&2\\ 0&0&1&\cdots&2\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&1 \end{array}\displaystyle \right )_{m\times m}. $$ See [1, 3]. □

$$ \int_{0}^{v} \varPsi_{m}(u) \,dB(u) \simeq \mathbf{P}_{B} \varPsi_{m}(v), $$ where $$\mathbf{P}_{B} = \left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} B(\frac{h}{2}) & B(h) & B(h) &\cdots& B(h)\\ 0 & B(\frac{3h}{2})-B(h) & B(2h)-B(h) & \cdots& B(2h)-B(h) \\ 0 & 0 & B(\frac{5h}{2})-B(2h) & \cdots& B(3h)-B(2h)\\ \vdots& \vdots& \vdots& \ddots& \vdots\\ 0 & 0 & 0 & \cdots& B (\frac{(2m-1)h}{2} )-B ((m-1)h ) \end{array}\displaystyle \right )_{m\times m}. $$

Suppose that \(H_{m}(v)\) is given in (10). Then we have $$ \int_{0}^{v} H_{m}(u) \,du \simeq \frac{1}{m}\mathbf{Q}\mathbf{P}\mathbf{Q}^{T}H_{m}(v)= \boldsymbol{\varLambda} H_{m}(v), $$ where Q and P are respectively given in (11) and Lemma 3.3, and \(\boldsymbol{\varLambda}=\frac{1}{m}\mathbf{Q}\mathbf{P}\mathbf{Q}^{T}\).
$$ \int_{0}^{v} H_{m}(u) \,dB(u) \simeq \frac{1}{m}\mathbf{Q}\mathbf{P}_{B}\mathbf{Q}^{T}H_{m}(v)= \boldsymbol{\varLambda}_{B}H_{m}(v), $$ where Q and \(\mathbf{P}_{B}\) are respectively given in (11) and Lemma 3.3, and \(\boldsymbol{\varLambda}_{B}=\frac{1}{m}\mathbf{Q}\mathbf{P}_{B}\mathbf{Q}^{T}\).

Numerical method

For convenience, we set \(m_{1}=m_{2}=m\); then the nonlinear SIVIE (1) can be solved by HWs. Firstly, a useful result for HWs is proved.

Theorem 4.1 For the analytic functions \(\sigma(v)=\sum a_{j}v^{j}\) and \(\rho(v)=\sum b_{j}v^{j}\), where j is a positive integer, we have $$\begin{gathered} \sigma \bigl(x_{m}(v) \bigr)=\sigma^{T}(C_{m})H_{m}(v), \\ \rho \bigl(x_{m}(v) \bigr)=\rho^{T}(C_{m})H_{m}(v), \end{gathered} $$ where \(H_{m}(v)\) and \(C_{m}\) are given in (10), and $$\begin{gathered} \sigma^{T}(C_{m})= \bigl( \sigma(c_{0}), \sigma(c_{1}),\ldots,\sigma(c_{m-1}) \bigr), \\ \rho^{T}(C_{m})= \bigl(\rho(c_{0}), \rho(c_{1}),\ldots,\rho(c_{m-1}) \bigr).\end{gathered} $$ Proof. According to the disjointness property of HWs, we can deduce $$\begin{aligned} \sigma \bigl(x_{m}(v) \bigr) =& \sum a_{j} \bigl(x_{m}(v) \bigr)^{j} \\ =& \sum a_{j} \bigl(c_{0}h_{0}(v)+c_{1}h_{1}(v)+ \cdots+c_{m-1}h_{m-1}(v) \bigr)^{j} \\ =& \sum a_{j} \bigl(c_{0}^{j},c_{1}^{j}, \ldots,c_{m-1}^{j} \bigr)H_{m}(v) \\ =& \sigma^{T}(C_{m})H_{m}(v), \end{aligned}$$ that is, $$ \sigma \bigl(x_{m}(v) \bigr)=\sigma^{T}(C_{m})H_{m}(v)=H_{m}^{T}(v) \sigma(C_{m}). $$ Similarly, $$ \rho \bigl(x_{m}(v) \bigr)=\rho^{T}(C_{m})H_{m}(v)=H_{m}^{T}(v) \rho(C_{m}). $$ The proof is completed. □

Now, in order to solve (1), we approximate \(x(v)\), \(x_{0}(v)\), \(k(u,v)\), and \(r(u,v)\) in the following forms by HWs: $$\begin{aligned}& x(v)\simeq x_{m}(v)=C_{m}^{T}H_{m}(v)=H_{m}^{T}(v)C_{m}, \end{aligned}$$ $$\begin{aligned}& x_{0}(v)\simeq x_{0_{m}}(v)={C_{0}}_{m}^{T}H_{m}(v)=H_{m}^{T}(v){C_{0}}_{m}, \end{aligned}$$ $$\begin{aligned}& k(u,v)\simeq{k_{m}(u,v)}=H_{m}^{T}(u) \mathbf{K}H_{m}(v)=H_{m}^{T}(v) \mathbf{K}^{T}H_{m}(u), \end{aligned}$$ $$\begin{aligned}& r(u,v)\simeq{r_{m}(u,v)}=H_{m}^{T}(u) \mathbf{R}H_{m}(v)=H_{m}^{T}(v) \mathbf{R}^{T}H_{m}(u), \end{aligned}$$ where \(C_{m}\) and \({C_{0}}_{m}\) are HW coefficient vectors, and K and R are HW coefficient matrices. Substituting approximations (12)–(17) into (1), we have $$\begin{aligned} C_{m}^{T}H_{m}(v) = & {C_{0}}_{m}^{T}H_{m}(v) +H_{m}^{T}(v)\mathbf{K}^{T} \int_{0}^{v}H_{m}(u) H_{m}^{T}(u) \sigma(C_{m}) \,du \\ &{}+ H_{m}^{T}(v)\mathbf{R}^{T} \int_{0}^{v}H_{m}(u) H_{m}^{T}(u)\rho (C_{m})\,dB(u). \end{aligned}$$ By Lemma 3.3, we get $$\begin{aligned} C_{m}^{T}H_{m}(v) = & {C_{0}}_{m}^{T}H_{m}(v) +H_{m}^{T}(v)\mathbf{K}^{T} \int_{0}^{v}\tilde{\boldsymbol{\sigma}(C_{m})} H_{m}(u) \,du \\ &{}+H_{m}^{T}(v)\mathbf{R}^{T} \int_{0}^{v}\tilde{\boldsymbol{\rho}(C_{m})} H_{m}(u)\,dB(u). \end{aligned}$$ Applying Lemmas 3.7 and 3.8, we get $$C_{m}^{T}H_{m}(v)={C_{0}}_{m}^{T}H_{m}(v)+H_{m}^{T}(v) \mathbf{K}^{T}\tilde{\boldsymbol{\sigma}(C_{m})} \boldsymbol{\varLambda} H_{m}(v) +H_{m}^{T}(v)\mathbf{R}^{T} \tilde{\boldsymbol{\rho}(C_{m})}\boldsymbol{\varLambda}_{B} H_{m}(v), $$ and then by Lemma 3.4, we derive $$ C_{m}^{T}H(v)={C_{0}}_{m}^{T}H(v)+ \hat{\mathbf{A}}_{0}^{T}H(v)+\hat{\mathbf{B}}_{0}^{T}H(v), $$ where \(\mathbf{A}_{0}=\mathbf{K}^{T}\tilde{\boldsymbol{\sigma}(C_{m})} \boldsymbol{\varLambda}\) and \(\mathbf{B}_{0}=\mathbf{R}^{T} \tilde{\boldsymbol{\rho}(C_{m})} \boldsymbol{\varLambda}_{B}\).
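Equation (18) is a nonlinear algebraic system for the Haar coefficient vector \(C_m\), and the discussion that follows solves it with a root finder. As an illustration of how the pieces fit together, the sketch below forms Q, P, and \(\mathbf{P}_B\) for one simulated Brownian path, builds Λ and \(\boldsymbol{\varLambda}_B\), applies the tilde and hat operators from the lemmas above (using \(Q^{-1}=Q^T/m\)), and solves (18) with scipy.optimize.fsolve as a stand-in for MATLAB's fsolve. The problem data are taken from Example 6.2 below; this is a minimal sketch, not the authors' code.

```python
import numpy as np
from scipy.optimize import fsolve

def haar(i, t):
    """Haar wavelet h_i on [0, 1) as in (6)."""
    t = np.asarray(t, dtype=float)
    if i == 0:
        return np.ones_like(t)
    l = int(np.floor(np.log2(i)))
    z = i - 2 ** l
    u = (2 ** l) * t - z
    return (2 ** (l / 2)) * (((u >= 0) & (u < 0.5)).astype(float)
                             - ((u >= 0.5) & (u < 1.0)).astype(float))

m = 32
h = 1.0 / m
mid = (np.arange(m) + 0.5) * h                    # subinterval midpoints
Q = np.array([haar(i, mid) for i in range(m)])    # H_m(v) = Q Psi_m(v), Q^T Q = m I

# Deterministic and stochastic operational matrices for BPFs (lemmas above).
P = (h / 2) * (np.eye(m) + 2 * np.triu(np.ones((m, m)), k=1))
rng = np.random.default_rng(0)
B_fine = np.concatenate(([0.0], np.cumsum(np.sqrt(h / 2) * rng.standard_normal(2 * m))))
B_grid, B_mid = B_fine[::2], B_fine[1::2]         # B(ih) and B((i + 1/2)h)
incr = np.diff(B_grid)                            # B((i+1)h) - B(ih)
PB = np.triu(np.tile(incr[:, None], (1, m)), k=1) + np.diag(B_mid - B_grid[:-1])
Lam = Q @ P @ Q.T / m                             # Lambda
LamB = Q @ PB @ Q.T / m                           # Lambda_B

# Problem data of Example 6.2: x0 = 1, k = r = exp(-(v-u)), sigma = sin, rho = cos.
x0 = lambda v: np.ones_like(v)
kfun = lambda u, v: np.exp(-(v - u))
sigma, rho = np.sin, np.cos

C0 = Q @ x0(mid) / m                              # Haar coefficients of x0
K = Q @ kfun(mid[:, None], mid[None, :]) @ Q.T / m**2
R = K.copy()                                      # here r(u, v) = k(u, v)

def tilde(F):                                     # H H^T F = tilde(F) H, with Q^{-1} = Q^T / m
    return Q @ np.diag(Q.T @ F) @ Q.T / m

def hat(M):                                       # H^T M H = hat(M)^T H, as a column vector
    return Q @ np.diag(Q.T @ M @ Q) / m

def residual(C):                                  # nonlinear system (18) for C_m
    A0 = K.T @ tilde(sigma(C)) @ Lam              # sigma applied entrywise, per Theorem 4.1
    B0 = R.T @ tilde(rho(C)) @ LamB
    return C - C0 - hat(A0) - hat(B0)

C = fsolve(residual, C0)                          # Haar coefficients of x_m
x_mid = Q.T @ C                                   # approximate solution at the midpoints
print(x_mid[:5])
```

Each run corresponds to one realization of the Brownian path encoded in \(\mathbf{P}_B\); averaging over many runs gives the mean solution reported for Example 6.2 below.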
For the nonlinear equation (18), a series of methods, such as the simple trapezoid method, Simpson's method, and the Romberg method, are often introduced in numerical analysis courses. In this paper, the MATLAB function fsolve is used to solve equation (18). In contrast to the articles [1, 3], we will give a strict and accurate error analysis in this section. Firstly, we recall two useful lemmas. Suppose that the function \(x(u)\), \(u\in[0,1)\), satisfies the boundedness condition and \(e(u)=x(u)-x_{m}(u)\), where \(x_{m}(u)\) is the m-term HW approximation of \(x(u)\). Then $$ \Vert e \Vert _{L^{2}([0,1))}^{2}= \int_{0}^{1}e^{2}(u)\,du \leq O \bigl(h^{2} \bigr). $$ Suppose that the function \(x(u,v)\) satisfying the boundedness condition is defined on \(\mathbf{D}=[0,1)\times[0,1)\) and \(e(u,v)=x(u,v)-x_{m}(u,v)\), where \(x_{m}(u,v)\) is the m-term HW approximation of \(x(u,v)\). Then $$ \Vert e \Vert _{L^{2}(\mathbf{D})}^{2}= \int_{0}^{1} \int_{0}^{1}e^{2}(u,v)\,du\,dv \leq O \bigl(h^{2} \bigr). $$ Secondly, let \(e(v)=x(v)-x_{m}(v)\), where \(x_{m}(v)\), \(x_{0_{m}}(v)\), \(k_{m}(u,v)\), and \(r_{m}(u,v)\) are the m-term HW approximations of \(x(v)\), \(x_{0}(v)\), \(k(u,v)\), and \(r(u,v)\), respectively. $$ \begin{aligned}[b]e(v) = {}& x(v)-x_{m}(v) \\ ={} & x_{0}(v)-x_{0_{m}}(v) \\ & + \int_{0}^{v} \bigl[{k(u,v)} {\sigma \bigl(x(u) \bigr)}-k_{m}(u,v){\sigma \bigl(x_{m}(u) \bigr)} \bigr]\,du \\ & + \int_{0}^{v} \bigl[{r(u,v)} {\rho \bigl(x(u) \bigr)}-{r_{m}(u,v)} {\rho \bigl(x_{m}(u) \bigr)} \bigr] \,dB(u).\end{aligned} $$ Lastly, the main convergence theorem is proved. Suppose that the analytic functions σ and ρ satisfy the following conditions: \(|\sigma(x)-\sigma(y)| \leq l_{1}|x-y|\), \(| \rho(x)-\rho(y)|\leq l_{3} |x-y|\); \(|\sigma(x)|\leq l_{2}\), \(|\rho(y)|\leq l_{4}\); \(|k(u,v)|\leq l_{5}\), \(|r(u,v)|\leq l_{6}\), where \(x,y\in\mathbb{R}\) and the constants \(l_{i}>0\), \(i=1, 2, \ldots, 6\). Then $$\begin{aligned} \int_{0}^{T} \mathbb{E} \bigl( \bigl\vert e_{m}(v) \bigr\vert ^{2} \bigr)\,dv= \int_{0}^{T}\mathbb{E} \bigl( \bigl\vert x(v)-x_{m}(v) \bigr\vert ^{2} \bigr)\,dv\leq O \bigl(h^{2} \bigr), \quad T\in[0,1). \end{aligned}$$ For (21), we have $$\begin{aligned} \mathbb{E} \bigl( \bigl\vert e_{m}(v) \bigr\vert ^{2} \bigr) \leq& 3 \biggl[ \mathbb{E} \bigl( \bigl\vert x_{0}(v)-x_{0_{m}}(v) \bigr\vert ^{2} \bigr) \\ &{}+ \mathbb{E} \biggl( \biggl\vert \int_{0}^{v} \bigl({k(u,v)} {\sigma \bigl(x(u) \bigr)}-{k_{m}(u,v)} {\sigma \bigl(x_{m}(u) \bigr)} \bigr)\,du \biggr\vert ^{2} \biggr) \\ &{}+ \mathbb{E} \biggl( \biggl\vert \int_{0}^{v} \bigl({r(u,v)} {\rho \bigl(x(u) \bigr)}-{r_{m}(u,v)} {\rho \bigl(x_{m}(u) \bigr)} \bigr) \,dB(u) \biggr\vert ^{2} \biggr) \biggr].
\end{aligned}$$ On the basis of Lipschitz continuity, Itô isometry, and Cauchy–Schwarz inequality, it yields $$\begin{aligned} \mathbb{E} \bigl( \bigl\vert e_{m}(v) \bigr\vert ^{2} \bigr) \leq& 3 \biggl[ \mathbb{E} \bigl( \bigl\vert x_{0}(v)-x_{0_{m}}(v) \bigr\vert ^{2} \bigr) \\ &{}+ \mathbb{E} \biggl( \int_{0}^{v} \bigl\vert {k(u,v)} {\sigma \bigl(x(u) \bigr)}-{k_{m}(u,v)} {\sigma \bigl(x_{m}(u) \bigr)} \bigr\vert ^{2}\,du \biggr) \\ &{}+ \mathbb{E} \biggl( \int_{0}^{v} \bigl\vert {r(u,v)} {\rho \bigl(x(u) \bigr)}-{r_{m}(u,v)} {\rho \bigl(x_{m}(u) \bigr)} \bigr\vert ^{2}\,du \biggr) \biggr] \\ = & 3 \biggl[\mathbb{E} \bigl( \bigl\vert x_{0}(v)-x_{0_{m}}(v) \bigr\vert ^{2} \bigr) \\ &{}+ \int_{0}^{v}\mathbb{E} \bigl( \bigl\vert {k(u,v)} \bigl( \sigma \bigl(x(u) \bigr)-\sigma \bigl(x_{m}(u) \bigr) \bigr)\\ &{} + \sigma \bigl(x_{m}(u) \bigr) \bigl({k(u,v)}-{k_{m}(u,v)} \bigr) \bigr\vert ^{2} \bigr)\,du \\ &{}+ \int_{0}^{v}\mathbb{E} \bigl( \bigl\vert {r(u,v)} \bigl(\rho \bigl(x(u) \bigr)-\rho \bigl(x_{m}(u) \bigr) \bigr) \\ &{}+\rho \bigl(x_{m}(u) \bigr) \bigl( {r(u,v)}-{r_{m}(u,v)} \bigr) \bigr\vert ^{2} \bigr)\,du \biggr] \\ \leq& 3 \biggl[ \bigl\vert x_{0}(v)-x_{0_{m}}(v) \bigr\vert ^{2} \\ &{}+ 2{l_{1}}^{2}{l_{5}}^{2} \int_{0}^{v}\mathbb{E} \bigl( \bigl\vert e_{m}(u) \bigr\vert ^{2} \bigr)\,du +2{l_{2}}^{2} \int_{0}^{v} \bigl\vert {k(u,v)}-{k_{m}(u,v)} \bigr\vert ^{2}\,du \\ &{}+ 2{l_{3}}^{2}{l_{6}}^{2} \int_{0}^{v}\mathbb{E} \bigl( \bigl\vert e_{m}(u) \bigr\vert ^{2} \bigr)\,du +2{l_{4}}^{2} \int_{0}^{v} \bigl\vert {r(u,v)}-{r_{m}(u,v)} \bigr\vert ^{2}\,du \biggr]. \end{aligned}$$ Then we can get $$\begin{aligned} \mathbb{E} \bigl( \bigl\vert e_{m}(v) \bigr\vert ^{2} \bigr) \leq{}& 3 \biggl[ \bigl\vert x_{0}(v)-x_{0_{m}}(v) \bigr\vert ^{2}+2{l_{2}}^{2} \int _{0}^{v} \bigl\vert {k(u,v)}-{k_{m}(u,v)} \bigr\vert ^{2}\,du \\ & + 2{l_{4}}^{2} \int_{0}^{v} \bigl\vert {r(u,v)}-{r_{m}(u,v)} \bigr\vert ^{2}\,du \biggr] \\ & +6 \bigl({l_{1}}^{2}{l_{5}}^{2}+{l_{3}}^{2}{l_{6}}^{2} \bigr) \int_{0}^{v}\mathbb{E} \bigl( \bigl\vert e_{m}(u) \bigr\vert ^{2} \bigr)\,du,\end{aligned} $$ $$\begin{gathered} \mathbb{E} \bigl( \bigl\vert e_{m}(v) \bigr\vert ^{2} \bigr)\leq\beta(v)+\alpha \int_{0}^{v}\mathbb{E} \bigl( \bigl\vert e_{m}(u) \bigr\vert ^{2} \bigr)\,du, \\ \beta(v)=3 \biggl[ \bigl\vert x_{0}(v)-x_{0_{m}}(v) \bigr\vert ^{2}\\ \phantom{\beta(v)=}{}+2{l_{2}}^{2} \int _{0}^{v} \bigl\vert {k(u,v)}-{k_{m}(u,v)} \bigr\vert ^{2}\,du +2{l_{4}}^{2} \int_{0}^{v} \bigl\vert {r(u,v)}-{r_{m}(u,v)} \bigr\vert ^{2}\,du \biggr], \\ \alpha=6 \bigl({l_{1}}^{2}{l_{5}}^{2}+{l_{3}}^{2}{l_{6}}^{2} \bigr).\end{gathered} $$ Let \(f(v)=\mathbb{E} ( |e_{m}(v) |^{2} )\), we get $$f(v)\leq\beta(v)+\alpha \int_{0}^{v}f(\tau)\,d\tau, \quad\tau\in[0,v). $$ By Gronwall's inequality, it follows that $$f(v)\leq\beta(v)+\alpha \int_{0}^{v}e^{\alpha(v-\tau)}\beta(\tau )\,d\tau, \quad v\in[0,1). 
Integrating over \([0,T]\), we obtain
$$\begin{aligned}& \int_{0}^{T}f(v)\,dv \\& \quad = \int_{0}^{T}\mathbb{E} \bigl( \bigl\vert e_{m}(v) \bigr\vert ^{2} \bigr)\,dv \\& \quad \leq \int_{0}^{T} \biggl(\beta(v)+\alpha \int_{0}^{v}e^{\alpha (v-\tau)}\beta(\tau)\,d\tau \biggr)\,dv \\& \quad = \int_{0}^{T}\beta(v)\,dv+\alpha \int_{0}^{T} \int_{0}^{v}e^{\alpha (v-\tau)}\beta(\tau)\,d\tau \,dv \\& \quad \leq \int_{0}^{T}\beta(v)\,dv+\alpha e^{\alpha T} \int_{0}^{T} \int _{0}^{v}\beta(\tau)\,d\tau \,dv \\& \quad =3 \int_{0}^{T} \bigl\vert x_{0}(v)-x_{0_{m}}(v) \bigr\vert ^{2}\,dv +6{l_{2}}^{2} \int_{0}^{T} \int_{0}^{v} \bigl\vert {k(u,v)}-{k_{m}(u,v)} \bigr\vert ^{2}\,du\,dv \\& \qquad{} + 6{l_{4}}^{2} \int_{0}^{T} \int_{0}^{v} \bigl\vert {r(u,v)}-{r_{m}(u,v)} \bigr\vert ^{2}\,du\,dv \\& \qquad{} + \alpha e^{\alpha T} \biggl[3 \int_{0}^{T} \int_{0}^{v} \bigl\vert x_{0}( \tau)-x_{0_{m}}(\tau) \bigr\vert ^{2}\,d\tau \,dv\\& \qquad{} +6{l_{2}}^{2} \int_{0}^{T} \int_{0}^{v} \int_{0}^{\tau} \bigl\vert {k(u,\tau )}-{k_{m}(u, \tau)} \bigr\vert ^{2}\,du\,d\tau \,dv \\& \qquad{} + 6{l_{4}}^{2} \int_{0}^{T} \int_{0}^{v} \int_{0}^{\tau} \bigl\vert {r(u, \tau)}-{r_{m}(u,\tau)} \bigr\vert ^{2}\,du\,d\tau \,dv \biggr] \\& \quad =3I_{1}+6l^{2}_{2}I_{2}+6l^{2}_{4}I_{3}+ \alpha e^{\alpha T} \bigl[3I_{4}+ 6l^{2}_{2}I_{5} + 6l^{2}_{4}I_{6} \bigr]. \end{aligned}$$
By using (19) and (20), we have
$$I_{i} \leq w_{i}h^{2}, \quad i=1, 2, \ldots, 6. $$
Thus we obtain
$$\begin{aligned} \int_{0}^{T}\mathbb{E} \bigl\vert e_{m}(v) \bigr\vert ^{2}\,dv &\leq \bigl[ \bigl(3w_{1}+6{l_{2}}^{2}w_{2}+6{l_{4}}^{2}w_{3} \bigr)+\alpha e^{\alpha T} \bigl(3w_{4}+6{l_{2}}^{2}w_{5}+6{l_{4}}^{2}w_{6} \bigr) \bigr]h^{2}\\&\leq O \bigl(h^{2} \bigr),\end{aligned} $$
where the constants \(w_{i}>0\), \(i=1, 2, \ldots, 6\). This completes the proof.

Numerical examples

In this section, some examples are given to verify the validity and rationality of the above method.

Example 6.1 Consider the nonlinear SIVIE [6, 8]
$$ x(v)=x_{0}(v)-a^{2} \int_{0}^{v}x(u) \bigl(1-x^{2}(u) \bigr) \,du +a \int_{0}^{v} \bigl(1-x^{2}(u) \bigr) \,dB(u), \quad v\in[0,1), $$
with the exact solution
$$ x(v)=\tanh \bigl(aB(v)+ \mathrm{arctanh}(x_{0}) \bigr). $$
In this example, \(a=\frac{1}{30}\) and \(x_{0}(v)=\frac{1}{10}\). The error means \(E_{m}\), error standard deviations \(E_{s}\), and confidence intervals of Example 6.1 for \(m=2^{4}\) and \(m=2^{5}\) are shown in Table 1 and Table 2, respectively. The error means \(E_{m}\) and error standard deviations \(E_{s}\) are obtained from \(10^{4}\) trajectories. Compared with Table 2 in [8], \(E_{m}\) is smaller and the confidence interval is narrower at the same confidence level. Moreover, the comparison of the exact and approximate solutions of Example 6.1 for \(m=2^{4}\) and \(m=2^{5}\) is displayed in Fig. 1 and Fig. 2, respectively.

Figure 1. \(m=2^{4}\): simulation result of the approximate solution and the exact solution for Example 6.1.
Figure 2. \(m=2^{5}\): simulation result of the approximate solution and the exact solution for Example 6.1.
Table 1. When \(m=2^{4}\), the error means \(E_{m}\), error standard deviations \(E_{s}\), and confidence intervals are given in this table.
Table 2. When \(m=2^{5}\), the error means \(E_{m}\), error standard deviations \(E_{s}\), and confidence intervals are given in this table.

Example 6.2 Consider the nonlinear SIVIE [17, 18]
$$ x(v)=1+ \int_{0}^{v}e^{-(v-u)}\sin \bigl(x(u) \bigr) \,du + \int_{0}^{v}e^{-(v-u)}\cos \bigl(x(u) \bigr) \,dB(u), \quad v\in[0,1). $$
The mean and approximate solutions of Example 6.2 for \(m=2^{4}\) and \(m=2^{5}\) are respectively given in Fig. 3 and Fig. 4, where the mean solution is obtained from \(10^{4}\) trajectories.

Figure 3. \(m=2^{4}\): simulation result of the approximate solution and the mean solution for Example 6.2.
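To make the reported statistics concrete, the following sketch shows how error means \(E_{m}\), error standard deviations \(E_{s}\), and confidence intervals of the kind listed in Table 1 and Table 2 can be estimated by Monte Carlo for Example 6.1. Python with NumPy is assumed here (the paper itself only mentions MATLAB's fsolve), and a plain Euler–Maruyama step is used as a stand-in for the Haar wavelet approximation; the helper approx_path and all parameter choices are illustrative, not taken from the paper.

```python
import numpy as np

# Monte Carlo estimate of the error statistics for Example 6.1.
# The SIVIE of Example 6.1 corresponds to the SDE
#   dx = -a^2 x (1 - x^2) dv + a (1 - x^2) dB,   x(0) = x0,
# with exact solution x(v) = tanh(a B(v) + arctanh(x0)).
# Euler-Maruyama (approx_path) is only a stand-in for the Haar wavelet scheme.

rng = np.random.default_rng(0)
a, x0 = 1.0 / 30.0, 0.1
m = 2 ** 4                      # number of grid cells on [0, 1), h = 1/m
h = 1.0 / m
n_paths = 10 ** 4

def approx_path(dB):
    """Euler-Maruyama approximation at the end of the grid defined by increments dB."""
    x = x0
    for inc in dB:
        x += -a ** 2 * x * (1.0 - x ** 2) * h + a * (1.0 - x ** 2) * inc
    return x

errors = np.empty(n_paths)
for i in range(n_paths):
    dB = rng.normal(0.0, np.sqrt(h), size=m)        # Brownian increments on the grid
    exact = np.tanh(a * dB.sum() + np.arctanh(x0))  # exact solution at the endpoint
    errors[i] = abs(approx_path(dB) - exact)

E_m, E_s = errors.mean(), errors.std(ddof=1)
half = 1.96 * E_s / np.sqrt(n_paths)                # 95% confidence half-width
print(f"E_m = {E_m:.3e}, E_s = {E_s:.3e}, CI = ({E_m - half:.3e}, {E_m + half:.3e})")
```

Replacing approx_path with the Haar wavelet solution obtained from the nonlinear system (18) would reproduce the actual experiment behind Tables 1 and 2.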
References

1. Mohammadi, F.: Numerical solution of stochastic Itô–Volterra integral equations using Haar wavelets. Numer. Math. Theory Methods Appl. 9, 416–431 (2016)
2. Reihani, M.H., Abadi, Z.: Rationalized Haar functions method for solving Fredholm and Volterra integral equations. J. Comput. Appl. Math. 200, 12–20 (2007)
3. Maleknejad, K., Khodabin, M., Rostami, M.: Numerical solution of stochastic Volterra integral equations by a stochastic operational matrix based on block pulse functions. Math. Comput. Model. 55, 791–800 (2012)
4. Maleknejad, K., Basirat, B., Hashemizadeh, E.: Hybrid Legendre polynomials and block-pulse functions approach for nonlinear Volterra–Fredholm integro-differential equations. Comput. Math. Appl. 61, 2821–2828 (2011)
5. Maleknejad, K., Khodabin, M., Rostami, M.: A numerical method for solving m-dimensional stochastic Itô–Volterra integral equations by stochastic operational matrix. Comput. Math. Appl. 63, 133–143 (2012)
6. Ezzati, R., Khodabin, M., Sadati, Z.: Numerical implementation of stochastic operational matrix driven by a fractional Brownian motion for solving a stochastic differential equation. Abstr. Appl. Anal. 2014, Article ID 523163 (2014)
7. Jiang, G., Sang, X.Y., Wu, J.H., Li, B.W.: Numerical solution of two-dimensional nonlinear stochastic Itô–Volterra integral equations by applying block pulse functions. Adv. Pure Math. 9, 53–66 (2019)
8. Hashemi, B., Khodabin, M., Maleknejad, K.: Numerical solution based on hat functions for solving nonlinear stochastic Itô–Volterra integral equations driven by fractional Brownian motion. Mediterr. J. Math. 14, 1–15 (2017)
9. Heydari, M.H., Hooshmandasl, M.R., Ghaini, F.M.M., Cattani, C.: A computational method for solving stochastic Itô–Volterra integral equations based on stochastic operational matrix for generalized hat basis functions. J. Comput. Phys. 270, 402–415 (2014)
10. Mirzaee, F., Hadadiyan, E.: Numerical solution of Volterra–Fredholm integral equations via modification of hat functions. Appl. Math. Comput. 280, 110–123 (2016)
11. Balakumar, V., Murugesan, K.: Single-term Walsh series method for systems of linear Volterra integral equations of the second kind. Appl. Math. Comput. 228, 371–376 (2014)
12. Blyth, W.F., May, R.L., Widyaningsih, P.: Volterra integral equations solved in Fredholm form using Walsh functions. ANZIAM J. 45, 269–282 (2003)
13. Mohamed, D.S., Taher, R.A.: Comparison of Chebyshev and Legendre polynomials methods for solving two dimensional Volterra–Fredholm integral equations. J. Egypt. Math. Soc. 25, 302–307 (2017)
14. Maleknejad, K., Sohrabi, S., Rostami, Y.: Numerical solution of nonlinear Volterra integral equations of the second kind by using Chebyshev polynomials. Appl. Math. Comput. 188, 123–128 (2007)
15. Ezzati, R., Najafalizadeh, S.: Application of Chebyshev polynomials for solving nonlinear Volterra–Fredholm integral equations system and convergence analysis. Indian J. Sci. Technol. 5, 2060–2064 (2012)
16. Asgari, M., Hashemizadeh, E., Khodabin, M., Maleknejad, K.: Numerical solution of nonlinear stochastic integral equation by stochastic operational matrix based on Bernstein polynomials. Bull. Math. Soc. Sci. Math. Roum. 57(105), 3–12 (2014)
17. Zhang, X.C.: Euler schemes and large deviations for stochastic Volterra equations with singular kernels. J. Differ. Equ. 244, 2226–2250 (2008)
18. Zhang, X.C.: Stochastic Volterra equations in Banach spaces and stochastic partial differential equation. J. Funct. Anal. 258, 1361–1425 (2010)

Availability of data and materials: Not applicable.

Funding: This article is funded by NSF Grant 11471105 of China and the Innovation Team of the Educational Department of Hubei Province T201412. These supports are greatly appreciated.

Author information: School of Mathematics and Statistics, Hubei Normal University, Huangshi, P.R. China: Jieheng Wu, Guo Jiang and Xiaoyan Sang. All authors contributed equally, and all authors read and approved the final manuscript. Correspondence to Guo Jiang.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article: Wu, J., Jiang, G., Sang, X.: Numerical solution of nonlinear stochastic Itô–Volterra integral equations based on Haar wavelets. Adv. Differ. Equ. 2019, 503 (2019). https://doi.org/10.1186/s13662-019-2440-6

Keywords: Stochastic integration operational matrices; Stochastic Itô–Volterra integral equations
Quantum topological data analysis with continuous variables

George Siopsis, Department of Physics and Astronomy, The University of Tennessee, Knoxville, Tennessee 37996-1200, USA

Foundations of Data Science, December 2019, 1(4): 419-431. doi: 10.3934/fods.2019017

Abstract: I introduce a continuous-variable quantum topological data analysis algorithm. The goal of the quantum algorithm is to calculate the Betti numbers in persistent homology, which are the dimensions of the kernel of the combinatorial Laplacian. I accomplish this task by using qRAM to create an oracle that organizes the sets of data. I then perform continuous-variable phase estimation on a Dirac operator to obtain a probability distribution with eigenvalue peaks. The results also leverage an implementation of a continuous-variable conditional swap gate.

Keywords: Quantum algorithm, quantum random access memory, continuous-variable quantum computation, topological data analysis, persistent homology, Vietoris-Rips complex, combinatorial Laplacian, Betti numbers.

Mathematics Subject Classification: Primary: 62-07; Secondary: 81P68, 81P70.

Citation: George Siopsis. Quantum topological data analysis with continuous variables. Foundations of Data Science, 2019, 1(4): 419-431. doi: 10.3934/fods.2019017
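Since the abstract and keywords center on the Vietoris-Rips complex built from pairwise distances at a chosen scale \(\varepsilon\) (see also Figure 2 below), a minimal classical sketch of that construction is given here. Python with NumPy is assumed, and the four-point cloud and the two scales are made-up illustrations rather than data from the paper.

```python
import numpy as np
from itertools import combinations

# Sketch of the Vietoris-Rips construction at scale eps: include every subset of
# points whose pairwise distances are all <= eps.  Points and eps are illustrative.

def vietoris_rips(points, eps, max_dim=2):
    """Return simplices (as index tuples) of the Vietoris-Rips complex at scale eps."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    simplices = [(i,) for i in range(n)]
    for k in range(2, max_dim + 2):              # edges, triangles, ...
        for idx in combinations(range(n), k):
            if all(dist[i, j] <= eps for i, j in combinations(idx, 2)):
                simplices.append(idx)
    return simplices

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9], [3.0, 0.0]])
for eps in (1.2, 2.5):
    cplx = vietoris_rips(points, eps)
    print(f"eps = {eps}: {len(cplx)} simplices")  # more simplices at the larger scale
```

At the larger scale more simplices appear, which is exactly why the Betti numbers are tracked as \(\varepsilon\) varies in persistent homology.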
Figure 1. The Betti numbers $ \beta_{0,1,2} $ for four example shapes (point, circle, spherical shell, and torus). They are the number of connected components, one-dimensional holes (also called tunnels or handles), and two-dimensional voids, respectively.

Figure 2. (a) Given data represented by points. (b) For a given distance $ \varepsilon $, a circle is drawn around each point. (c) Between every two points with contacting circles a line is drawn. These connections are edges of $ n $-dimensional shapes (simplices), and the space of simplices in (c) is called a simplicial complex. For two different values of $ \varepsilon $, as in (b) i, ii, and (c) i, ii, one can get more or fewer connections between the data points, resulting in different topologies. Therefore, the Betti numbers depend on the initial choice of $ \varepsilon $. It is useful to vary $ \varepsilon $ to find interesting structures.

Figure 3. The $ k $-simplices for $ k = 0,1,2,3 $. These are a vertex, an edge, a triangle, and a tetrahedron, respectively.

Figure 4. The action of the boundary operator is shown on a $ k = 2 $ simplex. A visual representation of a simplex being broken down into its boundary is depicted above. Its boundary consists of simplices of $ k-1 = 1 $. Below is the encoded representation of the boundary operator acting on the 2-simplex.
In this encoding a 1 represents a vertex in the corresponding position in the string of bits. The boundary sum is represented by a clockwise rotation around the original simplex, and the negative sign in the result alternates as in Eq. (5).

Figure 5. Consider the $ k = 2 $ complex on the left, for a given value of $ \varepsilon $. In order to show that the striped area is a void, it itself must be boundary-less, and not a boundary for any part of the complex. Fulfillment of these two properties is equivalent to the combinatorial Laplacian (11) applied to the striped area returning zero. Therefore this area would be part of the kernel of the combinatorial Laplacian for $ k = 2 $, contributing to the $ \beta_{2} $ Betti number.
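The captions above describe what the quantum algorithm ultimately estimates: the dimension of the kernel of the combinatorial Laplacian. As a purely classical reference computation, the sketch below (Python with NumPy; the hollow-triangle complex is a toy example chosen here, not one from the paper) computes \(\beta_k\) as the nullity of \(\Delta_k = \partial_k^{T}\partial_k + \partial_{k+1}\partial_{k+1}^{T}\) built from oriented boundary matrices.

```python
import numpy as np

# Classical reference computation of the Betti numbers beta_k as the nullity of the
# combinatorial Laplacian Delta_k = d_k^T d_k + d_{k+1} d_{k+1}^T, where d_k is the
# oriented boundary matrix C_k -> C_{k-1}.  The hollow triangle (vertices a, b, c and
# edges ab, ac, bc, with no filled 2-simplex) is a toy example, not from the paper.

def betti_numbers(boundaries, n_vertices):
    """boundaries[k] is the matrix of the boundary map d_{k+1}: C_{k+1} -> C_k."""
    dims = [n_vertices] + [B.shape[1] for B in boundaries]
    betti = []
    for k in range(len(dims)):
        down = boundaries[k - 1].T @ boundaries[k - 1] if k >= 1 else 0.0
        up = boundaries[k] @ boundaries[k].T if k < len(boundaries) else 0.0
        laplacian = down + up + np.zeros((dims[k], dims[k]))   # force matrix shape
        betti.append(dims[k] - np.linalg.matrix_rank(laplacian))
    return betti

# Boundary map d_1: columns are the oriented edges ab, ac, bc over vertices a, b, c.
d1 = np.array([[-1.0, -1.0,  0.0],
               [ 1.0,  0.0, -1.0],
               [ 0.0,  1.0,  1.0]])

print(betti_numbers([d1], n_vertices=3))   # expected [1, 1]: one component, one loop
```

The output [1, 1] reflects one connected component and one loop; adding a boundary matrix for a filled 2-simplex would remove the loop, just as filling the void in Figure 5 would empty the kernel of the corresponding Laplacian.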
how to remember the sign of the cross A sign line shows the signs of the different factors in each interval. Genuflecting: Another telltale sign of a Catholic is genuflection, which is touching the right knee to the floor while bending the left knee. Indulgences are given of 50 days for making the Sign of the Cross saying the words, and 100 days for the same when using holy water. Don't worry if … There's no feeling to pin point. Byzantine Catholics make a similar sign of the cross but go to the right shoulder first and then to the left. Amen. A Roman soldier becomes torn between his love for a Christian woman and his loyalty to Emperor Nero. We are to take off the old self and put on the self "which is being renewed … in the image of its creator," Paul tells us. Top Tip: In the Name of the Father, Bottom Tip: the Son, Right Tip: and the Holy, Left Tip: Spirit. if the tongue is 'crooked', if it goes to one side or the other, that is also an indication of a stroke. St Columbans' children's prayer book, Making the Sign of the Cross is now available to order! But what exactly are we doing when we make the Sign of the … 21 Benefits of Making the Sign of the Cross Read More » In using the same words with which we were baptized, the Sign of the Cross is a "summing up and re-acceptance of our baptism," according to then-Cardinal Joseph Ratzinger. This too is an apt metaphor for the Christian life: while we can be compared to sheep in the sense of following Christ as our shepherd we are not called to be sheepish. Look for apps or services that have the Cross-Account Protection badge . Thanks for the support! The Cross is personal. But what exactly are we doing when we make the Sign of the Cross? The eunuch went to what is now Sudan. This rejection fails to recognize that the sign of the cross is an ancient and scriptural help to Christians. Cross streets. It is traced by the Church on the forehead of the catechumen before baptism. A we bow down and make the sign of the cross 65 verse. Many place the symbol of the cross upon their Bibles, pulpits, steeples, and car bumpers. Prayer: Lord Jesus, when I am afflicted by the cross, help me remember that it is you and me, together, arms around each other, carrying the cross up the hill. The Sign of the Cross is a simple gesture yet a profound expression of faith for both Catholic and Orthodox Christians. If the light is completely inoperative, treat it as a 4-way stop. However I should mention that the Bible does not forbid a Christian to make the sign of the cross, meaning that when someone makes the sign of the cross he does not commit a sin. I had intended to write on a different Scripture matter, but this verse kept coming to my thoughts. The Sign of the Cross immediately focuses us on the true God, according to Ghezzi: "When we invoke the Trinity, we fix our attention on the God who made us, not on the God we have made. See more ideas about stations of the cross… Another 'sign' of a stroke is this: Ask the person to 'stick' out their tongue. Sign of the cross is within the scope of WikiProject Catholicism, an attempt to better organize and improve the quality of information in articles related to the Catholic Church.For more information, visit the project page. Others raise the index and middle fingers together, symbolizing Christ's divine and human natures. Toggle navigation. 
As your hand moves from your forehead down to your chest, it passes your sensory organs: your … Here are a few rules you should follow: When crossing an intersection without a stop or yield sign, decrease your speed and be ready to stop if necessary. We should begin our confession in this manner: Entering the confessional, we kneel, and making the sign of the cross we say to the priest: "Bless me, Father, for I have sinned"; and then we tell how long it has been since our last confession. Ask God to sanctify your mind and your thoughts. Pages 324. God the Father at the head, Jesus at the foot of the cross/the ground/on earth, The Holy Spirit bridging the gap between heaven and earth. Freeway entrances. Alzheimer's is evil. In first lifting our hand to our forehead we recall that the Father is the first person the Trinity. There are slight differences in how it is made between the various rites of … [Changed to a simple partial indulgence, … Catholic Exchange is a project of Sophia Institute Press. Praying the Sign of the Cross traces its origins back to the very early Christians marking a cross on their foreheads. In Scripture, God's name carries power. Catholic children are frequently exposed to the Sign of the Cross but often have a hard time figuring out the proper sequence (at least my … Whoever wishes to follow Christ "must deny himself" and "take up his cross" as Jesus told the disciples in Matthew 16:24. "I have been crucified with Christ," St. Paul writes in Galatians 2:19. "Proclaiming the sign of the cross proclaims our yes to this condition of discipleship," Ghezzi writes. The routine at Mass of making the small Sign of the Cross on our … If you have to cross multiply. In Colossians 3, St. Paul uses the image of clothing to describe how our sinful natures are transformed in Christ. Let's examine these both in more detail. … Loss is something so complex and I go thru spells where I cant stop crying and then I laugh about all the memories and then sit quiet and try to comprehend what happened. Don't worry if it's hard at first. Most Christians remember how Philip met an Ethiopian eunuch on the road to Gaza, but most Christians don't know how this is directly related to the sign of the cross. For zodiac signs who don't text back right away, timely replies simply aren't a priority. As Catholics, it's something we do when we enter a church, after we receive Communion, before meals, and every time we pray. A. In lowering our hand we "express that the Son proceeds from the Father." And, in ending with the Holy Spirit, we signify that the Spirit proceeds from both the Father and the Son, according to Francis de Sales. As St. Paul wrote in Ephesians 6, "Put on the armor of God so that you may be able to stand firm against the tactics of the devil. The novel's protagonist, whose journey from the Little Town where she lives across the border in search of her brother forms the backbone of the narrative. (By the sign of the Holy Cross, From our enemies, deliver us Lord God, Father, in the name of the Father, the Son and the Holy Spirit.) To clear the airway of a choking infant younger than age 1: Assume a seated position and hold the infant facedown on your forearm, which is resting on your thigh. The sign of the cross has been made before and after … The Stations of the Cross are also know as "Way of Sorrows" or simply "The Way". Pray. It marks the soul as … Yet most Protestants reject the idea of placing the sign of the cross upon themselves. 
Objective to find visual and accessible ways to remember this formula fast $$(x,y,z)\times(u,v,w)=(yw-zv,zu-xw,xv-yu)$$ I have used Sarrus' rule but it is slow, more here.Since it is slow, I have tried to find alternative ways such as binary-tree -visualization (but it is poor/slow until some clever ideas): Support the infant's head and neck with your hand, and place the head lower than the trunk. Remember to always come to a complete stop at a stop sign or blinking red light. Log in, If the kids aren't old enough to write it yet, I have some great. You may need to sign in. We begin and end our prayers with the Sign of the Cross, perhaps not realizing that the sign is … The movement from left to right also signifies our future passage from present misery to future glory just as Christ "crossed over from death to life and from Hades to Paradise," Pope Innocent II wrote. In Philippians 2:10, St. Paul tells us that "at the name of Jesus every knee should bend, of those in heaven and on earth and under the earth." And, in John 14:13-14, Jesus Himself said, "And whatever you ask in my name, I will do, so that the Father may be glorified in the Son. We instead are called to be soldiers of Christ. When you make the sign, you are professing a mini version of the creed — you are professing your belief in the Father, and in the Son and in the Holy Spirit. Iron MaidenThe X Factor℗ 1995, 2015 Iron Maiden LLP under exclusive license to … Next, find the pattern you're looking for: xy => z (x cross y is z) yz => x (y cross z is x; we looped around: y to z to x) zx => y; Now, xy and yx have opposite signs because they are forward and backward in our xyzxyz setup. … I ask all things in Their name and receive all things based on God's will. 2. cross, The thumb is often bent to touch the ring finger in the two-finger position Now we recreate your love we Bring the bread and wine to share a meal; Sign of grace and mercy, the presence … He has appeared on Fox News, C-SPAN and the Today Show and his writing has been published in the Washington Times, Providence Journal, the National Catholic Register and on MSNBC.com and ABCNews.com. He concludes that we can view the Sign of the Cross as "our way of participating in Christ's stripping at the Crucifixion and his being clothed in glory at his resurrection." Thus, in making the Sign of the Cross, we are radically identifying ourselves with the entirety of the crucifixion event—not just those parts of it we can accept or that are palatable to our sensibilities. The Sign of the Cross is a guide that conducts us — Necessity of a guide — State of man here below — The Sign of the Cross conducts man to his end by remembrance, and by imitation — Remembrance which it recalls — General remembrance — Particular We bow down and make the sign of the cross As a sacramental, the Sign of the Cross prepares us for receiving God's blessing and disposes us to cooperate with His grace, according to Ghezzi. The Sign of the Cross is a simple gesture yet a profound expression of faith for both Catholic and Orthodox Christians. This remembrance is deepened if we keep our right hand open, using all five fingers to make the sign—corresponding to the Five Wounds of Christ. Create a sign line to show where the expression in the inequality is positive or negative. As I prayed this morning I suddenly had a strong remembrance of the cross of our Lord. Dedicated to the Holy Cross of our Lord Jesus Christ. 
In Cruce Salus.-=:†:=-I was baptised on the feast of St John Joseph of the Cross in the Church of the Holy Cross, with my patroness being St Jeanne Jugan, also known as Marie de la Croix. After reviewing the words and meaning of the Sign of the Cross, tell the children they will be creating necklaces to help them remember the hand gestures that accompany the Sign of the Cross. Most Christians remember how Philip met an Ethiopian eunuch on the road to Gaza, but most Christians don't know how this is directly related to the sign of the cross. In making the Sign of the Cross, we mark ourselves as belong to Christ, our true shepherd. Tagged as: As a gesture often made in public, the Sign of the Cross is a simple way to witness our faith to others. The Sign of the Cross has a special connection to baptism. Side streets. Many worshippers make the sign of the cross with an open hand, their five fingers reminding them of the five wounds of Christ. But, if he do, the fiend will soon be frightened on account of the victorious token." In another statement, attributed to St. John Chrysostom, demons are said to "fly away" at the Sign of the Cross "dreading it as a staff that they are beaten with." (Source: Catholic Encyclopedia.). I came out of the bathroom draped in a towel after a failed attempt to take a shower. The Trinity The prayers that the priest says silently to himself before and after proclaiming the Gospel can give us a clue. The Sign of the Cross should always be used before our chief actions and undertakings in order to sanctify them and obtain God's blessing. The same way, it's not right to persecute the ones who refuse to make the sign of the cross or to force them to do so. These are the core of why Catholics do the sign of the cross. Be the Cross our seal made with boldness by our fingers on our brow, and on everything; over the bread we eat, and the cups we drink; in our comings in, and goings out; before our sleep, when we lie down and when we rise up; when we are in the way, and when we are still," wrote St. Cyril of Jerusalem. In affirming our belief in the Incarnation, the crucifixion, and the Trinity, we are making a sort of mini-confession of faith in words and gestures, proclaiming the core truths of the creed. Its cruel. It's not you, it's them (and their totally inability to text their boo back in a reasonable amount of time). Technically, the sign of the cross is a sacramental, a sacred sign instituted by the Church which prepares a person to receive grace and which sanctifies a moment or circumstance. Therefore, the sign in the heavens that is coming this week is a sign of the birth of the New Jerusalem as well as the second appearance of Christ as the King of Righteousness and the King of Salem (Hebrews 7:1, 2). Directed by Cecil B. DeMille. sacred sign of the Cross shall appear in the heavens, to be recognized by the elect with thankfulness and love, and by the reprobate with fear and trembling; for then shall it be the disciples of the Cross, and none but they, whom He will acknowledge for His Own. This formula is not as difficult to remember as it might at first appear to be. Before we pray we make the sign of the cross! After telling the time of our last confession, what do we confess? Columban e-Bulletin. Follow him on Twitter at https://twitter.com/StephenBeale1, Keep Christ at the Center of Your Christmas. 
The Church Fathers saw a connection between this verse and the stripping of Christ on the cross, "teaching that stripping off our old nature in baptism and putting on a new one was a participation in Christ's stripping at his crucifixion," Ghezzi writes. Amen. In other words, the Sign of the Cross commits us, body and soul, mind and heart, to Christ. Pass the paint around to each child. This preview shows page 64 - 69 out of 324 pages. Follow the drawing above and ask someone to help you. The sign of the cross is made simultaneously with this gesture. Technically, the sign of the cross is a sacramental, a sacred sign instituted by the Church which prepares a person to receive grace and which sanctifies a moment or circumstance. How to make the Sign of the Cross? Our movement is downward, from our foreheads to our chest "because Christ descended from the heavens to the earth," Pope Innocent III wrote in his instructions on making the Sign of the Cross. Instruct the children to glue the sticks together in the shape of a cross. When this happens, our prayer becomes more about us than an encounter with the living God. On the Sign of the Cross. If the expression is factored, show the signs of the individual factors. sign of the Cross, Stephen Beale is a freelance writer based in Providence, Rhode Island. How to Make the Sign of the Cross. Use a mirror! If prayer, at its core, is "an uprising of the mind to God," as St. John Damascene put it, then the Sign of the Cross assuredly qualifies. The sign of the cross is a mark of discipleship. Her quest not only directly represents the difficult, perilous trip so many make every day from Mexico to the United States, but also adapts the traditional mythological story of … All rights reserved. The products I link to are all things that I either have, or wish that I had, and all opinions shared on this blog are my own. The sign of the cross reflects biblical reality. Tidbits . We fling our images aside and address our prayers to God as he has revealed himself to be: Father, Son, and Holy Spirit.". The formal and proper form of the sign of the cross includes the use of three fingers, especially when entering the church. © Copyright 2020 Catholic Exchange. 428. This is also reinforced by using three fingers to make the sign, according to Pope Innocent III. The sign of the cross is: a confession of faith; a renewal of baptism; a mark of discipleship; an acceptance of suffering; a defense against the devil; and a victory over self-indulgence. These apply when the sign is illuminated, or during the time stated on the sign, or at all times if no time is shown. To make the sign of the cross one may observe this helpful illustrated example found in a most recent and delightful publication from CPH. The sign of the cross was made simply with the fingers (the index or the thumb) on the forehead or lips or breast (as Latin-rite Catholics do at the beginning of the Gospel lesson) or with the whole hand over the torso. Surprisingly, though, there is nothing in the rubrics about the laypeople making this sign. In crossing our shoulders we ask God "to support us—to shoulder us—in our suffering," Ghezzi writes. Share the best GIFs now >>> Fundamentally, in tracing out the outlines of a cross on ourselves, we are remembering Christ's crucifixion. This means if you click them and purchase something, I get a small commission. prayer, A We bow down and make the sign of the cross 65 VERSE TO REMEMBER Oh come let. 
The Sign of the Cross recalls the forgiveness of sins and the reversal of the Fall by passing "from the left side of the curse to the right of blessing," according to de Sales. If an app or service has the badge, it's participating in Cross-Account Protection. The set includes some common Australian road signs, traffic lights, pedestrian crossing that fits on the printable roads, and some paper sign poles to hold up the signs. Add some road signs and people and let your child explore and play. Before we pray we make the sign of the cross! The physical actions are explained as follows: "Touch your head at the naming of the Father; then bring your hand to the middle of your chest (over your heart) at the naming of the … The eunuch believed and was baptised, and the two went their separate ways (Acts 8:26-39). He came and gave his life for them. A. To suffer and to do. His areas of interest include Eastern Christianity, Marian and Eucharistic theology, medieval history, and the saints. Determine the solution, writing it in inequality notation or interval notation. Suppose That You Cross The Circuit Elements Shown In The Figure Below From Left To Right. …Crossing yourself is a very Christian symbol – it is a physical sign we make. Question: (4) It Is Important To Remember The Sign Convention Used For Kirchhoff's Loop Rule. The eunuch believed and was baptised, and the two went their separate ways (Acts 8:26-39). "Let us not then be ashamed to confess the Crucified. "For example, a shepherd marked his sheep as his property with a brand that he called a sphragis," Ghezzi writes. "Let it take in your whole being—body, soul, mind, will, thoughts, feelings, your doing and not-doing—and by signing it with the cross strengthen and consecrate the whole in the strength of Christ, in the name of the triune God," said twentieth century theologian Romano Guardini. Raised as an evangelical Protestant, he is a convert to Catholicism. 1\2 and 3\4 , remember the top numbers of the fractions are numerators and the bottom numbers are denominators. jesus, First you multiply 1 and 4 which will give you your new denominator, then you multiply 2 and 3 to get the … We in fact, make ourselves, open and willing channels for Cosmic Wisdom, Universal Love, and Creative Power to manifest in our lives and the world. Normal or illuminated signs may indicate that either right or left turns are prohibited. Scroll down to the "Signing in with Google" section. School Wesley Chapel High School; Course Title ENGL 1101-1102; Uploaded By MagistrateCrown1200. We created some free printable Australian road signs and accessories to go with our printable road set. The sign of the cross is a beautiful gesture which reminds the faithful of the cross of salvation while invoking the Holy Trinity. For Roman Catholics the sign of the cross is made using your right hand, you should touch your forehead at the mention of the Father; the lower middle of your chest at the mention of the Son; and the left shoulder on … I noticed in reference to the Sign of the Cross, you state a "mistake" many children make is touching the right shoulder before the left.Is that not the way it was originally done and is still done in the Eastern Catholic communities? It is a sign that this life has been brought under the shadow of Christ's work of redemption. Choose The Correct Sign (+or-) For The Potential Difference For The Four Cases. Catholic video that teaches children the Sign of the Cross. 
Warning Signs These signs warn you that you are approaching an unexpected, hazardous or unusual feature on the road ahead. We adore you, O Christ, and we bless you … because by your holy cross you have … The practice has developed over time and is now a regular practice for over a billion people around the world. As you touch your forehead in the Sign of the Cross, remember the anointing you received in your Baptism. Do you know how to make the sign of the cross? The Sign of the Cross. I draw the Holy Sign, All good thoughts stir within me, and renew. The Sign of the Cross also gives us a way to express our belief in Jesus' death and our hope in the Resurrection. We embrace the cross of Jesus and express our willingness to take up our own cross, all the while bursting with joyful hope in the Resurrection. The sign links you to the body of Christ, and when you make it you remember your joining to the body with Christ as the head. With Fredric March, Claudette Colbert, Elissa Landi, Charles Laughton. On the Sign of the Cross. By using the sign of the cross in a conscious manner, we can create within ourselves a condition that is supportive of mystical experiences and expanded awareness. If you would like to teach your kids more Catholic prayers by writing them out, you should definitely take a look at my ebook for kids, Some of the links in my posts are affiliate links. Note that some people end the Sign by crossing the thumb over the index finger to make a cross, and then kissing the thumb as a way of "kissing the Cross." The sign of the cross, in words and in action, reminds us of the two central realities of our faith: who God is (the Trinity) and what God has done for us (the Cross). As Catholics, it's something we do when we enter a church, after we receive Communion, before meals, and every time we pray. ... Notice that switching the order of the vectors in the cross product … (I'm paraphrasing this Russian Orthodox writer.) The sphragis was also the term for a general's name that would be tattooed on his soldiers, according to Ghezzi. Password. Do you know how to make the sign of the cross? First, the terms alternate in sign and notice that the 2x2 is missing the column below the standard basis vector that multiplies it as well as the row of standard basis vectors. It has been important since the early church. Holding two fingers together—either the thumb with the ring finger or with index finger—also represents the two natures of Christ. Email address. touch the left shoulder, then right shoulder, as you say "et Spiritus Sancti" ("and of the Holy Ghost"). Ask them to repeat the words to the Sign of the Cross while placing a circle of paint (like a jewel) at each tip of their crosses. The Sixth Station. Jesus still loves sinners. He is a former news editor at GoLocalProv.com and was a correspondent for the New Hampshire Union Leader, where he covered the 2008 presidential primary. Before the gospel we usually just do the signs of the cross without the prayer. Jesus says in Luke 9:23, "If any man will come after me, let him deny himself, and take up his cross daily, and follow me." Follow the drawing above and ask someone to help you. The sign of the cross was made from forehead to chest, and then from right shoulder to left shoulder with the right hand. Mar 1, 2015 - Lent is here and why not start this Lenten Season with Stations of the Cross (Kurishinte Vazhi). The message of the Cross remains a gift of love to those undeserving. 
In ancient Greek, the word for sign was sphragis, which was also a mark of ownership, according to Ghezzi. This frugal and fun craft is an easy way to teach children how to do the Sign of the Cross. The signum crucis, the sign of the cross, is powerful because it marks us as children of God who have thrown off the slavery of Satan and embraced the Cross of Christ as the way to salvation.The Cross destroyed death and hell, and through it, Jesus redeemed the world. "No empty gesture, the sign of the cross is a potent prayer that engages the Holy Spirit as the divine advocate and agent of our successful Christian living," writes Bert Ghezzi. In moving our hands from our foreheads to our hearts and then both shoulders, we are asking God's blessing for our mind, our passions and desires, our very bodies. In the New Testament, the word sphragis, mentioned above, is also sometimes translated as seal, as in 2 Corinthians 1:22, where St. Paul writes that, "the one who gives us security with you in Christ and who anointed us is God; he has also put his seal upon us and given the Spirit in our hearts as a first installment." In making the Sign of the Cross, we are once again sealing ourselves in the Spirit, invoking His powerful intervention in our lives. Making the triple Sign of the Cross like this before the Gospel is a longstanding tradition. Subscribe to our monthly enews, Columban eBulletin and keep up to date with Columban mission news and stories. With Tenor, maker of GIF Keyboard, add popular Sign Of The Cross animated GIFs to your conversations. Their slumbering strength divine; Till there springs up a courage high and true. Roundabouts; Since your chances of a collision increase in an intersection, it's important to proceed with caution. He welcomes tips, suggestions, and any other feedback at bealenews at gmail dot com. Making the sign of the cross helps us to remember that we are coming to God to talk with Him and ask His blessing on what we do or say. To remember the right hand rule, write the xyz order twice: xyzxyz. Have each child put a dab of glue on the middle of one on the tongue depressors. For a blinking yellow light, proceed slowly and with caution. … take the helmet of salvation and the sword of the Spirit, which is the word of God.", The Sign of the Cross is one of the very weapons we use in that battle with the devil. On August 21 we saw a solar eclipse pass over seven cities called "Salem" from the west coast to the … Taking away someones ability to remember memories and then how to function like a human … My mom uses the full prayer as a blessing for my dad before he goes to work every morning and for me … One of the temptations in prayer is to address it to God as we conceive of Him—the man upstairs, our buddy, a sort of cosmic genie, etc. If you ask anything of me in my name, I will do it.". They had this mirror in Lydia's preschool class last year. Through this Bible study we will explore the practice of the prayer and also the enormity of the mystery of the Trinity and the sacrifice of our ever-loving God. John 19:20 continues, "Many of the Jews read this sign, for the place where Jesus was crucified was near the city, and the sign was written in Aramaic, Latin and Greek." Today, many times when the cross of Jesus is displayed, the letters INRI are placed on the sign above the cross. It reminds us of the Passion of Jesus. 
"At every forward step and movement, at every going in and out, when we put on our clothes and shoes, when we bathe, when we sit at table, when we light the lamps, on couch, on seat, in all the ordinary actions of daily life, we trace upon the forehead the sign," wrote Tertullian. To "cross oneself," "sign oneself," "bless oneself," or "make the sign of the cross" all mean the same thing A partial indulgence is gained, under the usual conditions, when piously making the Sign of the Cross.. Footnotes: 1 The use of "bless" here refers to a parental blessing -- i.e., a prayer for God's grace for a … Cross multiplication is when you multiply two fractions diagonally across. The thumb, forefinger, and middle fingers were held together to symbolize the Holy Trinity– Father, Son, and Holy Spirit. Making the sign of the cross helps us to remember that we are coming to God to talk with Him and ask His blessing on what we do or say. As one medieval preacher named Aelfric declared, "A man may wave about wonderfully with his hands without creating any blessing unless he make the sign of the cross. Remember to recheck the mouth periodically. Above all, the Cross is a symbol of love. In its most common Roman Catholic form, the sign of the cross is made by touching one's forehead with a finger or a few, then the chest, then the front of the left shoulder, and finally the front of the right shoulder. We remember how you loved us to your death, And still we celebrate, for you are with us here; And we believe that we will see you when you come, In your glory, Lord, we remember, we celebrate, we believe. SUBSCRIBE . The celebration of the Stations of the Cross is common on the Fridays of Lent, especially Good Friday. By make the sign of the cross, we acknowledge that he has redeemed us, and that through baptism we have become the children of God. Like most Americans, I remember where I was and what I was doing on 9/11. A native of Topsfield, Massachusetts, he graduated from Brown University in 2004 with a degree in classics and history. This does not raise the price of the product that you order. WHENE'ER across this sinful flesh of mine. :-), How to Access the Catholic Icing Subscriber Bonus Page, 10 Genius Systems For Home Based Education, Tell Me About The Catholic Faith Notebooking Pages, free printable Sign Of The Cross pages for preschoolers here. East wrong and heart, to Christ belong to Christ, our prayer becomes more us! I get a small commission and why not start this Lenten Season with Stations of the Cross GIFs. Not start this Lenten Season with Stations of the bathroom draped in towel!, perhaps not realizing that the priest says silently to himself before and after proclaiming the is! And our hope in the Resurrection Massachusetts, he graduated from Brown University in 2004 with a brand that called! As I prayed this morning I suddenly had a strong how to remember the sign of the cross of the Cross animated GIFs to conversations! Numbers of the Cross we are remembering Christ's crucifixion suggestions, and Holy Spirit at dot... At a stop sign the price of the Cross of our last,! Symbolizing Christ 's divine and human natures suffering, " Ghezzi writes Christ's crucifixion a billion people the... In inequality notation or interval notation warning signs these signs warn you that you are approaching an unexpected, or. By how to remember the sign of the cross three fingers, especially when entering the church on the middle of one on the 's. 
A strong remembrance of the Cross sanctifies our day these signs warn you you! To Ghezzi anything of me in my name, I will do.. Gesture often made in public, the word for sign was sphragis, which also. Natures of Christ are transformed in Christ often made in public, the sign the... Stir within me, and the two went their separate ways ( Acts 8:26-39 ) towel after a attempt... Making the triple sign of the Cross upon themselves many worshippers make the sign of the draped. Book, making the triple sign of the bathroom draped in a towel after a failed attempt to take shower... With this gesture and then from right shoulder to left shoulder with the ring finger or with index represents. And 3\4, remember the anointing you received in your baptism to recognize that the sign of the Cross your. Increase in an intersection, it ' s hard at first popular sign of the Cross an. Bottom numbers are denominators with our printable road set write on a different Scripture matter, but verse... Of Sorrows " or simply " the way " in inequality notation interval. The West ; however, that does not really make us right and the how to remember the sign of the cross went their ways. Others raise the price of the individual factors kids aren ' t old enough write. To our monthly enews, Columban eBulletin and Keep up to date with Columban news... Circuit Elements Shown in the Resurrection brought under the shadow of Christ's of! This Lenten Season with Stations of the sign of the Cross includes Use! Gospel can give us a clue school Wesley Chapel high school ; Course ENGL. Is n't functioning properly and you have a blinking red light service has the badge it! That would be tattooed on his soldiers, according to Ghezzi to the. And Holy Spirit to himself before and after proclaiming the Gospel we usually just do the sign of Cross! Center of your Christmas child put a dab of glue on the road ahead Keyboard add. Your conversations the term for a general's name that would be tattooed on his soldiers according! In a towel after a failed attempt to take a shower from right shoulder first and then to the shoulder. Your baptism hand to our forehead we recall that the priest says silently to himself before and after proclaiming Gospel. ; Uploaded by MagistrateCrown1200 to make the sign of the Cross with an open,..., Claudette Colbert, Elissa Landi, Charles Laughton Since your chances of a collision increase in an intersection it! Cross includes the Use of three fingers, especially good Friday red light, slowly! As belong to Christ, what do we confess our hope in the Figure Below from left right... Bathroom draped in a towel after a failed attempt to take a shower than the trunk a stop... Torn between his love for a Christian woman and his loyalty to Emperor Nero work redemption... The prayers that the Father is the first person the Trinity the of. 64 - 69 out of 324 pages or blinking red light, treat it as a gesture often made public! The project 's quality scale numbers of the five wounds of Christ the Circuit Elements Shown the... Heart, to Christ, our prayer becomes more about us than an encounter with right... And true by MagistrateCrown1200 verse to remember Oh come let our true.... 
CommonCrawl
Optimal $ Z $-eigenvalue inclusion intervals of tensors and their applications
Caili Sang 1,2 and Zhen Chen 1,*
School of Mathematical Sciences, Guizhou Normal University, Guiyang, Guizhou 550025, China
College of Data Science and Information Engineering, Guizhou Minzu University, Guiyang, Guizhou 550025, China
* Corresponding author: Zhen Chen
Received October 2020, Revised January 2021, Early access April 2021. doi: 10.3934/jimo.2021075
Firstly, a weakness of Theorem 3.2 in [Journal of Industrial and Management Optimization, 17(2) (2021) 687-693] is pointed out. Secondly, a new Geršgorin-type $ Z $-eigenvalue inclusion interval for tensors is given. Subsequently, another Geršgorin-type $ Z $-eigenvalue inclusion interval with parameters for even order tensors is presented. Thirdly, by selecting appropriate parameters some optimal intervals are provided and proved to be tighter than some existing results. Finally, as an application, some sufficient conditions for the positive definiteness of homogeneous polynomial forms as well as the asymptotic stability of time-invariant polynomial systems are obtained. As another application, bounds of the $ Z $-spectral radius of weakly symmetric nonnegative tensors are presented, which are used to estimate the convergence rate of the greedy rank-one update algorithm and derive bounds of the geometric measure of entanglement of symmetric pure states with nonnegative amplitudes.
Keywords: Z-eigenvalues, inclusion intervals, nonnegative tensors, weakly symmetric, spectral radius.
Mathematics Subject Classification: Primary: 15A18, 15A42; Secondary: 15A69.
Citation: Caili Sang, Zhen Chen. Optimal $ Z $-eigenvalue inclusion intervals of tensors and their applications. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2021075
A. Ammar, F. Chinesta and A. Falcó, On the convergence of a greedy rank-one update algorithm for a class of linear systems, Arch. Comput. Methods Eng., 17 (2010), 473-486. doi: 10.1007/s11831-010-9048-z.
L. Bloy and R. Verma, On computing the underlying fiber directions from the diffusion orientation distribution function, Med. Image Comput. Comput. Assist. Interv., 5241 (2008), 1–8. Available from: https://www.ncbi.nlm.nih.gov/pubmed/18979725. doi: 10.1007/978-3-540-85988-8_1.
N. Bose and P. Kamt, Algorithm for stability test of multidimensional filters, IEEE Trans. Acoust. Speech Signal Process, 22 (1974), 307-314. doi: 10.1109/TASSP.1974.1162592.
N. K. Bose and R. W. Newcomb, Tellegon's theorem and multivariate realizability theory, Int. J. Electron, 36 (1974), 417-425. doi: 10.1080/00207217408900421.
K. C. Chang, K. J. Pearson and T. Zhang, Some variational principles for $Z$-eigenvalues of nonnegative tensors, Linear Algebra Appl., 438 (2013), 4166-4182. doi: 10.1016/j.laa.2013.02.013.
C. Deng, H. Li and C. Bu, Brauer-type eigenvalue inclusion sets of stochastic/irreducible tensors and positive definiteness of tensors, Linear Algebra Appl., 556 (2018), 55-69. doi: 10.1016/j.laa.2018.06.032.
R. A. Devore and V. N. Temlyakov, Some remarks on greedy algorithms, Adv. Comput. Math., 5 (1996), 173-187. doi: 10.1007/BF02124742.
P. V. D. Driessche, Reproduction numbers of infectious disease models, Infectious Disease Model., 2 (2017), 288-303. doi: 10.1016/j.idm.2017.06.002.
A. Falco and A. Nouy, A proper generalized decomposition for the solution of elliptic problems in abstract form by using a functional Eckart-Young approach, J. Math. Anal. Appl., 376 (2011), 469-480. doi: 10.1016/j.jmaa.2010.12.003.
J. He, Bounds for the largest eigenvalue of nonnegative tensors, J. Comput. Anal. Appl., 20 (2016), 1290-1301.
J. He, Y.-M. Liu, H. Ke, J.-K. Tian and X. Li, Bounds for the $Z$-spectral radius of nonnegative tensors, Springerplus, 5 (2016), 1727. doi: 10.1186/s40064-016-3338-3.
J. He and T.-Z. Huang, Upper bound for the largest $Z$-eigenvalue of positive tensors, Appl. Math. Lett., 38 (2014), 110-114. doi: 10.1016/j.aml.2014.07.012.
J. C. Hsu and A. U. Meyer, Modern Control Principles and Applications, The McGraw-Hill Series in Advanced Chemistry, McGraw-Hill Book Co., Inc., New York-Toronto-London, 1956.
E. I. Jury, N. K. Bose and B. D. Anderson, Output feedback stabilization and related problems-solutions via decision methods, IEEE Trans. Automat. Control, AC20 (1975), 53-66. doi: 10.1109/tac.1975.1100846.
E. Kofidis and P. A. Regalia, On the best rank-1 approximation of higher-order supersymmetric tensors, SIAM J. Matrix Anal. Appl., 23 (2002), 863-884. doi: 10.1137/S0895479801387413.
T. G. Kolda and J. R. Mayo, Shifted power method for computing tensor eigenpairs, SIAM J. Matrix Anal. Appl., 32 (2011), 1095-1124. doi: 10.1137/100801482.
J. C. Kuang, Applied Inequalities (4th ed.), Shandong Science and Technology Press, Jinan, 2010.
L. D. Lathauwer, B. D. Moor and J. Vandewalle, On the best rank-1 and rank-($R_1, R_2, \ldots, R_N$) approximation of higher-order tensors, SIAM J. Matrix Anal. Appl., 21 (2000), 1324-1342. doi: 10.1137/S0895479898346995.
C. Li and Y. Li, An eigenvalue localization set for tensors with applications to determine the positive (semi-)definiteness of tensors, Linear Multilinear Algebra, 64 (2016), 587-601. doi: 10.1080/03081087.2015.1049582.
C. Li, Y. Li and X. Kong, New eigenvalue inclusion sets for tensors, Numer. Linear Algebra Appl., 21 (2014), 39-50. doi: 10.1002/nla.1858.
C. Li, Z. Chen and Y. Li, A new eigenvalue inclusion set for tensors and its applications, Linear Algebra Appl., 481 (2015), 36-53. doi: 10.1016/j.laa.2015.04.023.
C. Li, J. Zhou and Y. Li, A new Brauer-type eigenvalue localization set for tensors, Linear Multilinear Algebra, 64 (2016), 727-736. doi: 10.1080/03081087.2015.1119779.
C. Li, A. Jiao and Y. Li, An $S$-type eigenvalue location set for tensors, Linear Algebra Appl., 493 (2016), 469-483. doi: 10.1016/j.laa.2015.12.018.
C. Li, Y. Liu and Y. Li, Note on $Z$-eigenvalue inclusion theorems for tensors, J. Ind. Manag. Optim., 17 (2021), 687-693. doi: 10.3934/jimo.2019129.
W. Li, D. Liu and S.-W. Vong, $Z$-eigenpair bounds for an irreducible nonnegative tensor, Linear Algebra Appl., 483 (2015), 182-199. doi: 10.1016/j.laa.2015.05.033.
L. H. Lim, Singular values and eigenvalues of tensors: A variational approach, in CAMSAP'05: Proceeding of the IEEE International Workshop on Computational Advances in MultiSensor Adaptive Processing, 2005, 129–132.
Q. Liu and Y. Li, Bounds for the $Z$-eigenpair of general nonnegative tensors, Open Math., 14 (2016), 181-194. doi: 10.1515/math-2016-0017.
Q. Ni, L. Qi and F. Wang, An eigenvalue method for testing positive definiteness of a multivariate form, IEEE Trans. Automat. Control, 53 (2008), 1096-1107. doi: 10.1109/TAC.2008.923679.
L. Qi, Eigenvalues of a real supersymmetric tensor, J. Symbolic Comput., 40 (2005), 1302-1324. doi: 10.1016/j.jsc.2005.05.007.
L. Qi, Rank and eigenvalues of a supersymmetric tensor, the multivariate homogeneous polynomial and the algebraic hypersurface it defines, J. Symbolic Comput., 41 (2006), 1309-1327. doi: 10.1016/j.jsc.2006.02.011.
L. Qi, G. Yu and E. X. Wu, Higher order positive semidefinite diffusion tensor imaging, SIAM J. Imaging Sciences, 3 (2010), 416-433. doi: 10.1137/090755138.
L. Qi, The best rank-one approximation ratio of a tensor space, SIAM J. Matrix Anal. Appl., 32 (2011), 430-442. doi: 10.1137/100795802.
L. Qi and Z. Luo, Tensor Analysis: Spectral Theory and Special Tensors, Society for Industrial and Applied Mathematics, Philadelphia, 2017. doi: 10.1137/1.9781611974751.ch1.
L. Qi, H. Chen and Y. Chen, Tensor Eigenvalues and Their Applications, Springer, Singapore, 2018. doi: 10.1007/978-981-10-8058-6.
C. Sang, A new Brauer-type $Z$-eigenvalue inclusion set for tensors, Numer. Algor., 80 (2019), 781-794. doi: 10.1007/s11075-018-0506-2.
C. Sang and J. Zhao, $E$-eigenvalue inclusion theorems for tensors, Filomat, 33 (2019), 3883-3891. doi: 10.2298/FIL1912883S.
C. Sang and Z. Chen, $E$-eigenvalue localization sets for tensors, J. Ind. Manag. Optim., 16 (2020), 2045-2063. doi: 10.3934/jimo.2019042.
C. Sang and Z. Chen, $Z$-eigenvalue localization sets for even order tensors and their applications, Acta Appl. Math., 169 (2020), 323-339. doi: 10.1007/s10440-019-00300-1.
Y. Song and L. Qi, Spectral properties of positively homogeneous operators induced by higher order tensors, SIAM J. Matrix Anal. Appl., 34 (2013), 1581-1595. doi: 10.1137/130909135.
L. Sun, G. Wang and L. Liu, Further Study on $Z$-eigenvalue localization set and positive definiteness of fourth-order tensors, Bull. Malays. Math. Sci. Soc., 44 (2021), 105-129. doi: 10.1007/s40840-020-00939-2.
G. Wang, G. Zhou and L. Caccetta, $Z$-eigenvalue inclusion theorems for tensors, Discrete Contin. Dyn. Syst., Ser. B., 22 (2017), 187-198. doi: 10.3934/dcdsb.2017009.
Y. Wang and L. Qi, On the successive supersymmetric rank-1 decomposition of higher-order supersymmetric tensors, Numer. Linear Algebra Appl., 14 (2007), 503-519. doi: 10.1002/nla.537.
Y. Wang and G. Wang, Two $S$-type $Z$-eigenvalue inclusion sets for tensors, J. Inequal. Appl., 2017 (2017), Paper No. 152, 12 pp. doi: 10.1186/s13660-017-1428-6.
L. Xiong and J. Liu, $Z$-eigenvalue inclusion theorem of tensors and the geometric measure of entanglement of multipartite pure states, Comput. Appl. Math., 39 (2020), Paper No. 135, 11 pp. doi: 10.1007/s40314-020-01166-y.
T. Zhang and G. H. Golub, Rank-one approximation of higher-order tensors, SIAM J. Matrix Anal. Appl., 23 (2001), 534-550. doi: 10.1137/S0895479899352045.
J. Zhao, A new $Z$-eigenvalue localization set for tensors, J. Inequal. Appl., 2017 (2017), Paper No. 85, 9 pp. doi: 10.1186/s13660-017-1363-6.
J. Zhao and C. Sang, Two new eigenvalue localization sets for tensors and their applications, Open Math., 15 (2017), 1267-1276. doi: 10.1515/math-2017-0106.
J. Zhao, $E$-eigenvalue localization sets for fourth-order tensors, Bull. Malays. Math. Sci. Soc., 43 (2020), 1685-1707. doi: 10.1007/s40840-019-00768-y.
Table 1. Upper bounds of $ \varrho(\mathcal{A}) $
Method | $ \varrho(\mathcal{A})\leq $
Theorem 5.5, i.e., Corollary 4.5 of [39] | 26.0000
Theorem 3.3 of [25] | 25.7771
Theorem 3.4 of [47], where $ S=\{1\},\bar{S}=\{2\} $ | 25.7382
Theorem 6 of [11] | 25.6437
Theorem 4 of [43], where $ S=\{1\},\bar{S}=\{2\} $ | 25.6437
Theorem 5.6 | 16.0000
Corollary 9 | 14.5000
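Table 1 refers to an example tensor from the paper that is not reproduced in this excerpt. Purely as an illustration of how such bounds are used, the sketch below (not code from the paper) computes the simplest Geršgorin-type upper bound on the Z-spectral radius of a random symmetric nonnegative fourth-order tensor and compares it with a power-method estimate of the largest Z-eigenvalue; the tensor, the shift and the iteration count are arbitrary choices.

```python
# Illustrative sketch (not code from the paper): for a nonnegative symmetric
# 4th-order tensor A, compare the simple Gersgorin-type upper bound
#     rho_Z(A) <= max_i sum_{j,k,l} |a_{ijkl}|
# (cf. the Z-eigenvalue inclusion theorems of [41]) with a numerical estimate
# of the largest Z-eigenvalue from the shifted symmetric higher-order power
# method (SS-HOPM) of Kolda and Mayo [16]. The random tensor is made up here,
# not the example tensor behind Table 1.
from itertools import permutations
from math import factorial

import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 4
A = rng.random((n,) * m)
A = sum(np.transpose(A, p) for p in permutations(range(m))) / factorial(m)  # symmetrize

def Axm1(A, x):
    """(A x^{m-1})_i for a 4th-order tensor."""
    return np.einsum('ijkl,j,k,l->i', A, x, x, x)

# Gersgorin-type upper bound on the Z-spectral radius
bound = np.abs(A).reshape(n, -1).sum(axis=1).max()

# SS-HOPM: fixed-point iteration x <- normalize(A x^{m-1} + alpha * x)
x, alpha = np.ones(n) / np.sqrt(n), 1.0
for _ in range(500):
    y = Axm1(A, x) + alpha * x
    x = y / np.linalg.norm(y)
lam = x @ Axm1(A, x)  # Z-eigenvalue estimate, lambda = A x^m with ||x||_2 = 1

print(f"SS-HOPM estimate of rho_Z: {lam:.4f}; Gersgorin-type bound: {bound:.4f}")
```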
CommonCrawl
Table Sharing

When customers enter a restaurant, at times they have to wait for a table to be cleared before they can sit (happens a lot at Red Robin's at the Provo Mall). (Image: Tables3.JPG) In order to speed up the process, in some old Chinese restaurants customers who do not know each other may be assigned to sit together at a table; this way they can get a table much more quickly. In modern times, this is perhaps best described by a cafeteria, where random people may come and sit by you at a table. The Chinese Restaurant Process uses an analogy to this. Imagine a restaurant with an unlimited number of tables; when a new customer comes in, he or she may either sit at a new table, or sit at a table that is already occupied. This is shown in the following diagram: (Image: Tables.JPG)

In probability theory, the Chinese restaurant process is a discrete-time stochastic process whose value at any positive-integer time n is a partition $B_n$ of the set {1, 2, 3, …, n}, whose probability distribution is determined as follows. At time n + 1 the element n + 1 is either:
1. added to one of the blocks of the partition $B_n$, where each block b is chosen with probability |b|/(n + 1), where |b| is the size of the block, or
2. added to the partition $B_n$ as a new singleton block, with probability 1/(n + 1).
The random partition so generated is exchangeable in the sense that relabeling {1, …, n} does not change the distribution of the partition, and it is consistent in the sense that the law of the partition of n − 1 obtained by removing the element n from the random partition at time n is the same as the law of the random partition at time n − 1. [1]

In the restaurant analogy: a Chinese restaurant has infinitely many tables, each table can have infinitely many seats, and a new customer may either sit next to a customer at an occupied table or sit at a new table next to the ones occupied. At each time n + 1 (n > 0) a new customer comes in (element n + 1); he or she can choose from the following n + 1 places: next to each of the n customers already seated at the tables, or at a new table. Thus
P(customer sits at a table occupied by others) = $\frac{m_i}{n+1}$, where $m_i$ is the number of people sitting at that table;
P(customer sits at a new table) = $\frac{1}{n+1}$.
This is illustrated in the following diagram: (Image: Tables2.JPG)

The Chinese restaurant analogy can be generalized to a model with two parameters: $\vartheta$, which is called the strength (or concentration) parameter, and $\alpha$, which is called the discount parameter. The probability that a new customer arriving at time n + 1, finding |B| occupied tables, sits at a new table is given by
$\frac{\vartheta + |B|\alpha}{n+\vartheta}$
and the probability that the new customer joins an occupied table b is given by
$\frac{|b|-\alpha}{n+\vartheta}$
with either $\alpha<0$ and $\vartheta=-L\alpha$ for some L in {1, 2, 3, …}, or $0\le\alpha\le1$ and $\vartheta>-\alpha$. The probability of any particular partition B of {1, …, n} can then be expressed using Gamma functions:
$P(B_n=B)=\frac{\Gamma(\vartheta)}{\Gamma(\vartheta+n)}\frac{\alpha^{|B|}\Gamma(\vartheta/\alpha+|B|)}{\Gamma(\vartheta/\alpha)}\prod_{b\in B}\frac{\Gamma(|b|-\alpha)}{\Gamma(1-\alpha)}$
When $\alpha$ is 0, the above equation simplifies to
$\frac{\Gamma(\vartheta)\vartheta^{|B|}}{\Gamma(\vartheta+n)}\prod_{b\in B}\Gamma(|b|)$
This is also called the Ewens distribution, with only one parameter $\vartheta$, which is used in population genetics and the unified neutral theory of biodiversity.
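The seating rule above is straightforward to simulate. A minimal sketch, assuming the two-parameter form just described (the parameter values and random seed are arbitrary; setting alpha = 0 and theta = 1 recovers the basic process described earlier):

```python
# Minimal sketch of the two-parameter (theta, alpha) Chinese restaurant process.
# theta = strength parameter, alpha = discount parameter.
import random

def crp(n_customers, theta=1.0, alpha=0.0, seed=None):
    rng = random.Random(seed)
    tables = []                      # tables[k] = number of customers at table k
    for n in range(n_customers):     # customer n+1 arrives after n are seated
        # P(join table b) = (|b| - alpha) / (n + theta)
        # P(new table)    = (theta + |B| * alpha) / (n + theta)
        weights = [b - alpha for b in tables] + [theta + len(tables) * alpha]
        k = rng.choices(range(len(tables) + 1), weights=weights)[0]
        if k == len(tables):
            tables.append(1)         # open a new table
        else:
            tables[k] += 1           # join an existing table
    return tables

print(crp(20, theta=1.0, alpha=0.0, seed=42))   # table sizes for 20 customers
```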
Expected Number of Tables

For the Ewens distribution, where $\alpha$ is 0 and $\vartheta$ is a positive real number, given n customers the expected number of occupied tables is given by the formula:
$\sum_{k=1}^n\frac{\vartheta}{\vartheta+k-1}$
The Chinese Restaurant Process has been used in applications such as modeling text, clustering biological microarray data, and detecting objects in images.
1. Wikipedia, "Chinese Restaurant Process". Online: https://en.wikipedia.org/wiki/Chinese_restaurant_process
2. Lecture #1, COS 597C, David Blei
3. Xiaodong's tech notes on computer vision and machine learning. Online: http://xiaodong-yu.blogspot.com/2009/09/chinese-restaurant-process-and-chinese.html
Briefly describe the analogy in the Chinese Restaurant Process, and compute the probabilities of the new customer sitting at each table.
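A small sketch addressing this computation directly (illustrative table sizes and parameter values only, not from the cited sources):

```python
# One-parameter CRP: seating probabilities for the next customer, and the
# expected number of occupied tables E[K_n] = sum_{k=1}^{n} theta/(theta+k-1).
def seating_probabilities(table_sizes, theta=1.0):
    """P(next customer joins each existing table), plus P(new table) last."""
    n = sum(table_sizes)
    probs = [m / (n + theta) for m in table_sizes]
    probs.append(theta / (n + theta))            # new table
    return probs

def expected_tables(n, theta=1.0):
    return sum(theta / (theta + k - 1) for k in range(1, n + 1))

# Example: tables of sizes 3, 2 and 1 with theta = 1, so n = 6 customers seated.
print(seating_probabilities([3, 2, 1]))          # 3/7, 2/7, 1/7, and 1/7 for a new table
print(expected_tables(20))                        # harmonic number H_20 when theta = 1
```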
CommonCrawl
Spatio-temporal analysis of the main dengue vector populations in Singapore Haoyang Sun1, Borame L Dickens1, Daniel Richards2, Janet Ong3, Jayanthi Rajarethinam3, Muhammad E. E. Hassim4, Jue Tao Lim1, L. Roman Carrasco5, Joel Aik3, Grace Yap3, Alex R. Cook1 & Lee Ching Ng3,6 Despite the licensure of the world's first dengue vaccine and the current development of additional vaccine candidates, successful Aedes control remains critical to the reduction of dengue virus transmission. To date, there is still limited literature that attempts to explain the spatio-temporal population dynamics of Aedes mosquitoes within a single city, which hinders the development of more effective citywide vector control strategies. Narrowing this knowledge gap requires consistent and longitudinal measurement of Aedes abundance across the city as well as examination of relationships between variables on a much finer scale. We utilized a high-resolution longitudinal dataset generated from Singapore's islandwide Gravitrap surveillance system over a 2-year period and built a Bayesian hierarchical model to explain the spatio-temporal dynamics of Aedes aegypti and Aedes albopictus in relation to a wide range of environmental and anthropogenic variables. We also created a baseline during our model assessment to serve as a benchmark to be compared with the model's out-of-sample prediction/forecast accuracy as measured by the mean absolute error. For both Aedes species, building age and nearby managed vegetation cover were found to have a significant positive association with the mean mosquito abundance, with the former being the strongest predictor. We also observed substantial evidence of a nonlinear effect of weekly maximum temperature on the Aedes abundance. Our models generally yielded modest but statistically significant reductions in the out-of-sample prediction/forecast error relative to the baseline. Our findings suggest that public residential estates with older buildings and more nearby managed vegetation should be prioritized for vector control inspections and community advocacy to reduce the abundance of Aedes mosquitoes and the risk of dengue transmission. Dengue fever is a rapidly emerging vector-borne disease mainly transmitted by Aedes aegypti and Aedes albopictus [1], causing an estimated number of 390 million infections per year worldwide [2]. Clinical manifestations of dengue infection range from mild fever to potentially lethal complications such as dengue shock syndrome [1]. Despite the licensure of the world's first dengue vaccine and the current development of additional vaccine candidates [3], successful vector control remains critical to the reduction of dengue virus transmission [4]. Moreover, the benefits of Aedes population control extend beyond dengue infection prevention alone, given the multiple diseases that can be transmitted by these mosquito species, such as Zika, chikungunya, and yellow fever. Previous work has yielded important insights into the behaviors and ecology of the main dengue vectors. Both Aedes species can easily disperse throughout areas with ~ 300 m radius to seek oviposition sites [5]. The Ae. aegypti mosquitoes in particular have become a highly efficient vector for dengue transmission owing to their skip oviposition behavior (i.e. deposit eggs from the same batch in multiple sites), desiccation-resistant eggs, preference for human biting, multiple feeds per gonotrophic cycle, and adaptation to reside and breed in human habitats, among other factors [6]. The Ae. 
albopictus mosquitoes were found to have a relatively lower contribution to the reported dengue cases overall despite their high competence for dengue transmission, which is primarily attributed to aspects of their ecology [7]. Both environmental and anthropogenic factors can exert an important influence on the distribution of Aedes mosquitoes [8,9,10,11,12], and modeling studies have been carried out to map the suitability and distribution of the main dengue vectors at a global scale [10,11,12]. However, there is still very limited literature that attempts to explain the spatio-temporal population dynamics of Aedes mosquitoes within a single city [13], which hinders the development of more effective citywide vector control strategies. To bridge this knowledge gap requires consistent and longitudinal measurement of Aedes abundance across the city [13] as well as examination of relationships between variables at a much finer scale. As an island city-state lying 1° north of the equator, Singapore faces regular dengue outbreaks with the four dengue virus serotypes co-circulating all year round [14]. The low herd immunity [15], coupled with the tropical climate and highly urbanized environment, poses challenges to the nation's dengue control program [16]. As part of Singapore's vector control program, the National Environment Agency has conducted regular inspections of homes and surrounding areas all year round to remove mosquito-breeding habitats and mobilized the community and stakeholders to minimize instances of stagnant water [17]. Vector control activities were also ramped up in dengue cluster areas, with space sprays used for adulticiding. To monitor the spatio-temporal trend of the adult Aedes abundance in Singapore, the National Environment Agency also established an islandwide Gravitrap surveillance system in 2017, with over 50,000 Gravitraps deployed in the public housing estates across the island [18, 19], which accommodate ~ 80% of the resident population [20]. The weekly mean catch per trap for each species provides an indication of the Aedes abundance around each specific residential location and each time point, which is presumed to be closely associated with an individual's risk of exposure to mosquito bites inside or around homes and also much less susceptible to the measurement bias encountered in non-systematic breeding sites inspection [21]. To facilitate resource planning for Singapore's vector control, we used the longitudinal dataset generated from the islandwide Gravitrap surveillance system during 2017–2018 as well as a wide range of environmental and anthropogenic variables acquired from various sources to (1) explain the spatio-temporal dynamics of the Ae. aegypti and Ae. albopictus population in Singapore's high-rise residential zones and (2) assess our model's ability to predict Aedes abundance across space and generate forecasts up to 3 weeks ahead. Aedes mosquito data were collected fortnightly for each of the 552 sites from 2017–2018 [19], with odd-numbered blocks inspected 1 week and even-numbered blocks the next. Fortnightly collections were then halved to obtain the weekly numbers of Ae. aegypti and Ae. albopictus caught at each site respectively, which contained roughly equal numbers of odd- and even-numbered blocks. We created a 300-m buffer around each block based on Liew et al. [5], and for each site, all buffers were merged into a single polygon to be used for deriving zonal statistics for the environmental and anthropogenic variables. 
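A minimal sketch of this buffering step, assuming a GeoPandas workflow (the file names, the site_id column and the choice of EPSG:3414 are assumptions for illustration, not taken from the paper):

```python
# Minimal sketch (not the authors' code) of the buffering step described above:
# draw a 300 m buffer around every surveyed block and merge the buffers of each
# site into one polygon for later zonal statistics. Geometries are assumed to
# be reprojected to a metric CRS (e.g. SVY21 / EPSG:3414 for Singapore) so that
# buffer(300) means 300 metres.
import geopandas as gpd

blocks = gpd.read_file("blocks.gpkg")             # hypothetical layer: one row per block
blocks = blocks.to_crs(epsg=3414)                 # metric CRS

blocks["geometry"] = blocks.geometry.buffer(300)  # 300 m buffer around each block
site_buffers = blocks.dissolve(by="site_id")      # merge buffers within each site

site_buffers.to_file("site_buffers.gpkg")         # used later for zonal statistics
```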
The Singapore land classification map was generated at a resolution of 10 m using seven separate images from the Sentinel-2 satellite of the European Space Agency [22]. The collected images were taken on different dates to ensure the existence of cloud-free pixels for the whole of mainland Singapore based on a cloud cover classification algorithm [22]. With 309 labeled data points obtained manually using Google Earth, a random forest algorithm was used to produce seven land cover maps, excluding cloudy areas for each of the collected images [22]. The final classified land cover for each pixel was set to be the majority vote out of all the predictions, with an out-of-bag classification accuracy of 81% [22]. For each site, we derived the percentage of the buffer area covered by water, grass, forest, and managed vegetation (i.e. trees and shrubs with structure dominated by human management), respectively, setting "urban" as the reference level. Data on waterbodies were extracted from OpenStreetMap [23]. We measured the distance to the nearest waterway from each block using ArcMap 10.6, which was then averaged within each site (similarly to the distance to the nearest water area). We also obtained Singapore's drain line map from the Public Utilities Board. The total drain line density for each site was defined to be the total length of the drain lines falling within the corresponding buffer divided by the buffer area. In addition, the average age of buildings for each site was computed using lease commencement year data collected from the Singapore Land Authority [24]. Weekly mean, maximum, and minimum temperature and mean relative humidity were obtained from a total of 21 weather stations installed by the National Environment Agency. For each climatic variable and each week, we fitted a thin plate spline surface to produce an interpolated value for each site. Weekly raster maps of total precipitation were obtained from the Meteorological Service Singapore at ~ 500 m × 500 m resolution, and for each site and each week, all the pixel values within the corresponding buffer were averaged. All the aforementioned explanatory variables were standardized to zero mean and unit variance, and a quadratic term for each of the standardized temperature variables was created to examine nonlinear effects [25]. To understand the direction and strength of associations between Aedes abundance and different environmental and anthropogenic variables, we first computed the pairwise Pearson correlation coefficients for the full set of covariates and removed redundant variables using a threshold of ± 0.6 to avoid collinearity. A Bayesian spatio-temporal model was created, where we assumed that the number of Ae. aegypti or Ae. albopictus caught at site \(i\) during week \(t\) (\(y_{it}\)) followed a negative binomial distribution with mean \(\mu_{it}\) and dispersion parameter \(r\), namely: $$y_{it} \sim NB\left( {\mu_{it} , r} \right),$$ $$\log \left( {\mu_{it} } \right) = \log \left( {E_{it} } \right) + b_{0} + \mathop \sum \limits_{j} \beta_{j} x_{ij} + \mathop \sum \limits_{k} \gamma_{k} w_{itk} + u_{i} + v_{{PA_{i} }} + \phi_{t} .$$ In the equation above, \(E_{it}\) denotes the number of Gravitraps present at site \(i\) during week \(t\), \(b_{0}\) the intercept, \(x_{ij}\) the spatial variables of site \(i\), and \(w_{itk}\) all the weekly weather measurements of site \(i\) between 1 and 3 weeks prior to week \(t\). 
We included an unstructured spatial effect \(u_{i}\) for each site and an extra term \(v_{{PA_{i} }}\) for the corresponding planning area (each containing 20 sites on average, with an interquartile range of 8–26) to account for additional spatial dependence. The temporally structured effect \(\phi_{t}\) was assumed to follow a random walk with a maximum order of two. Throughout this article, we used \(t = 1\) to denote the first epidemiological week of 2017 and \(t = 104\) the last epidemiological week of 2018. All the model parameters were assigned a minimally informative prior (refer to Table 2 caption), and parameter estimation was performed using Integrated Nested Laplace Approximation [26, 27], with 95% credible intervals (CrI) computed to summarize the uncertainty in each model parameter. For each species, both the optimal order of the random walk for the temporally structured effect and the time lag of the weather variables were selected based on the deviance information criterion. Further variable subset selection was not implemented at this stage to avoid biased parameter estimates resulting from sequential comparisons, since the primary aim of our Bayesian spatio-temporal model was to infer the associations between Aedes abundance and all the different environmental and anthropogenic variables considered in this study. Next, we used cross-validation to assess how accurately one can predict Aedes abundance across space. Here we treated the total number of Ae. aegypti or Ae. albopictus caught at each site during 2017–2018 as the response variable, with the log transformation of the total number of trap-week units as an offset. Only spatial fixed effects and the planning area random effect were included in the model, and a backwards elimination procedure was implemented for fixed effects variable selection using the Akaike information criterion during model training. Two forms of cross-validations were performed, namely, leave-one-site-out and leave-one-planning-area-out, where in each case a baseline prediction of the mean catch per trap per week was generated for each site, to be used as a benchmark to assess whether the model can indeed yield a higher prediction accuracy. In the former case, each time a site \(i\) was left out for testing, its baseline prediction was defined as the observed mean catch per trap per week of the site that was geographically the closest to site \(i\). In the latter case, the baseline prediction for each site \(i\) was defined as the observed mean catch per trap per week averaged across all the sites within the planning area that was geographically the closest to site \(i\) but did not contain site \(i\). Finally, we evaluated the contribution of weather variables to the improvement of the weekly Aedes abundance forecast accuracy up to 3 weeks ahead. Using \(t_{C}\) to denote the current time point, we treated the mean catch per trap of each site at week \(\left( {t_{C} + \Delta t} \right) \left( {\Delta t = 1,2, {\text{or }} 3} \right)\) as the response variable, and each was regressed upon all the weather and/or entomological (i.e. Aedes abundance) covariates at weeks \(t_{C}\), \(\left( {t_{C} - 1} \right)\) and \(\left( {t_{C} - 2} \right)\). Here, the entomological covariates were always included as autoregressive terms, with the additional inclusion of weather covariates in an alternative model to assess the resulting change in the out-of-sample forecast errors. 
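The core observation model underlying these analyses can be summarized in code. The following is a minimal sketch, not the authors' implementation (which used R-INLA): the input arrays are hypothetical, the temporal term uses a first-order random walk in place of the RW2 ultimately selected, and the priors shown are illustrative rather than those of the paper.

```python
# Minimal sketch (not the authors' R-INLA implementation) of the negative
# binomial observation model described above. Hypothetical inputs: `catch`
# (weekly counts per site), `n_traps` (offset E_it), `X_spatial` and
# `W_weather` (standardized covariates), and integer indices `site_idx`,
# `pa_idx`, `week_idx`.
import numpy as np
import pymc as pm

def build_model(catch, n_traps, X_spatial, W_weather, site_idx, pa_idx, week_idx):
    n_sites, n_pa, n_weeks = site_idx.max() + 1, pa_idx.max() + 1, week_idx.max() + 1
    with pm.Model() as model:
        b0 = pm.Normal("b0", 0.0, 10.0)
        beta = pm.Normal("beta", 0.0, 1.0, shape=X_spatial.shape[1])    # spatial fixed effects
        gamma = pm.Normal("gamma", 0.0, 1.0, shape=W_weather.shape[1])  # lagged weather effects
        sigma_u = pm.HalfNormal("sigma_u", 1.0)
        sigma_v = pm.HalfNormal("sigma_v", 1.0)
        sigma_phi = pm.HalfNormal("sigma_phi", 0.1)
        u = pm.Normal("u", 0.0, sigma_u, shape=n_sites)   # unstructured site effect
        v = pm.Normal("v", 0.0, sigma_v, shape=n_pa)      # planning-area effect
        phi = pm.GaussianRandomWalk("phi", sigma=sigma_phi,
                                    init_dist=pm.Normal.dist(0.0, 1.0),
                                    shape=n_weeks)        # temporally structured effect (RW1 here)
        log_mu = (np.log(n_traps) + b0
                  + pm.math.dot(X_spatial, beta)
                  + pm.math.dot(W_weather, gamma)
                  + u[site_idx] + v[pa_idx] + phi[week_idx])
        r = pm.Gamma("r", alpha=2.0, beta=0.1)            # dispersion parameter
        pm.NegativeBinomial("y", mu=pm.math.exp(log_mu), alpha=r, observed=catch)
    return model
```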
Due to the large number of lagged weather variables included in the alternative model, we used the least absolute shrinkage and selection operator to perform variable selection during model training. For each site and each species, we derived the out-of-sample Aedes abundance forecast accuracy at week \(\left( {t^{*} + \Delta t} \right)\) for all combinations of \(t^{*} \in \left\{ {55, 60, \ldots , 95, 100} \right\}\) and \(\Delta t \in \left\{ {1,2,3} \right\}\), respectively, with the model being trained on all the historical data points of that site with \(t_{C} < t^{*}\). The corresponding baseline forecast was defined as the observed Aedes abundance of that site at week \(t^{*}\), and hence we will also refer to \(t^{*}\) as the baseline time point.

In total, 4,923,456 trap-week units of observation were obtained, with 495,638 Ae. aegypti and 132,533 Ae. albopictus caught during the entire study period. For both species, we observed a marked difference in the Aedes abundance across space, with the mean catch per trap per week at some sites exceeding fivefold that at some other sites (Fig. 1). For example, only an average of 0.05 Ae. aegypti mosquitoes were caught per trap per week at a site within the Clementi area during the study period, in contrast to 0.28 at a site in Tampines. Operationally, these data are updated on a weekly basis to provide policy makers with an indication of which areas may require more vector control to mitigate the risk of dengue transmission. The mean and standard deviation of the site-level mean catch per trap per week were 0.102 and 0.074 for Ae. aegypti and 0.027 and 0.019 for Ae. albopictus (Table 1 contains the summary statistics of all the variables in this study, and visualizations of selected variables across space or time can be found in Additional file 1: Supporting information).

Fig. 1 Observed Aedes abundance (mean catch per trap per week) of each site during 2017–2018: (a) Ae. aegypti and (b) Ae. albopictus. We first created a 300 m buffer around each block, and all buffers for each site were merged into a single polygon, which was then colored according to the observed Aedes abundance value

Table 1 Summary statistics of all the variables included in the study

Based on the spatio-temporal model estimates, both nearby managed vegetation cover and building age were found to have a direct association with the abundance of both species (Table 2 and Fig. 2). On average, we estimated that a 1-SD (10 years) increase in the average age of buildings was associated with a 52.3% (95% CrI: 42.0%–63.2%) increase in the Ae. aegypti abundance and a 38.1% (95% CrI: 31.0%–45.6%) increase in the Ae. albopictus abundance at the site level, when all the other variables were held constant (Fig. 2). For forest cover and distance to water area, the signs of the point estimates were found to be opposite between the two mosquito species, although the 95% credible interval may contain the null effect in some cases (Table 2 and Fig. 2). Even after controlling for all the fixed effects, substantial heterogeneity of Aedes abundance remained both between sites and between planning areas, as shown by the standard deviation estimates of the spatial random effects (Table 2).

Table 2 Posterior estimates of the Bayesian spatio-temporal model parameters§

Fig. 2 Estimated percentage change in the expected value of Aedes abundance (weekly mean catch per trap) due to a 1-SD increase in each covariate when all the other variables were held constant.
Filled circles denote the posterior median estimates, and the solid lines denote the 95% credible intervals. The standard deviation of each covariate can be found in Table 1, and the estimated effects of lagged temperature covariates on the predicted Aedes abundance were visualized in Fig. 3.

For both species, inclusion of weather measurements in all the past 3 weeks together with a random walk model of order 2 for the temporally structured effect yielded the lowest deviance information criterion. However, compared with the spatial covariates, the weather covariates were estimated to have a relatively limited impact on the variation of adult Aedes abundance in the context of Singapore (Table 2 and Fig. 2). The 95% credible intervals of the quadratic term coefficients for the weekly maximum temperature were away from zero for both species and all time lags (Table 2), and the Aedes abundance was estimated to first increase and then decrease as we varied the lagged weekly maximum temperature from 28.0 °C to 36.6 °C while holding the other variables constant (Fig. 3). Specifically, with all other covariates held constant, the median estimate of Ae. aegypti abundance peaked at 30.3 °C, 30.0 °C, and 31.3 °C for weekly maximum temperature measured at 1, 2, and 3 weeks' lag, respectively. Similarly, the turning points were 31.9 °C, 31.6 °C, and 31.9 °C for Ae. albopictus. Out of all the covariates collected in this study, only weekly mean and minimum temperatures were removed from the Bayesian spatio-temporal model during collinearity assessment.

Fig. 3 Predicted values of Aedes abundance (weekly mean catch per trap) at different values of lagged weekly maximum temperature with all the other variables held fixed at their average values. Solid lines denote the posterior median estimates and the shaded areas denote the 95% prediction bands.

In both leave-one-site-out and leave-one-planning-area-out cross validations, which were performed to assess predictive accuracy across space, there was an overall increasing trend in the observed site-level Aedes abundance as we moved from the lowest to the highest quintile based on the out-of-sample model predictions (Fig. 4). Except for the leave-one-site-out cross validation of the model for Ae. aegypti, there was a modest and statistically significant reduction in the mean absolute prediction error of the model compared with the baseline prediction (Table 3). Likewise, a modest and statistically significant reduction in the out-of-sample forecast error was observed for models forecasting 2- or 3-week ahead Aedes abundance, regardless of species or whether we included weather covariates as additional predictors (Table 4). Our model, however, did not outperform the 1-week ahead baseline forecast (Table 4), owing to the fortnightly mosquito collection and the subsequent conversion to weekly data (details described in Methods), which caused data points at adjacent weeks to share 50% of the information in common. Notably, we found that in all cases the additional inclusion of lagged weather covariates did not improve the out-of-sample forecast accuracy compared with a simple model that only included autoregressive terms as predictors (Table 4).

Fig. 4 Box-plots of the observed site-level Aedes abundance (mean catch per trap per week) during 2017–2018 within each quintile based on the out-of-sample model predictions: (a), (b) leave-one-site-out cross validation and (c), (d) leave-one-planning-area-out cross validation. A small number of extreme values were omitted from the graph for clarity.
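The turning points reported above follow directly from the quadratic temperature specification. As a worked expression (the coefficient estimates themselves are those reported in Table 2): writing the contribution of a lagged weekly maximum temperature \(T\) to the linear predictor as \(\gamma_{1}\tilde{T} + \gamma_{2}\tilde{T}^{2}\), where \(\tilde{T} = (T - \bar{T})/s_{T}\) is the standardized temperature, the predicted abundance is maximized (when \(\gamma_{2} < 0\)) at

$$T^{*} = \bar{T} - \frac{\gamma_{1}}{2\gamma_{2}}\,s_{T},$$

with \(\bar{T}\) and \(s_{T}\) the mean and standard deviation used for standardization.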
Table 3 Percentage reduction in the out-of-sample mean absolute prediction error of the model compared with the baseline prediction

Table 4 Percentage reduction in the out-of-sample mean absolute forecast error of the model compared with the baseline forecast

This study examined the spatial and temporal variation of the main dengue vectors in Singapore's high-rise public residential zones in relation to a wide range of environmental and anthropogenic variables. The insights derived from this study further add to previous work that aimed to understand Aedes ecology in the local context and can facilitate the formulation of more effective vector control strategies in the future. Our model performance also suggests the potential use of spatio-temporal mapping as a tool to improve the understanding of the Aedes distribution in other cities or countries, where intensive entomological surveillance may be harder to achieve. We found that the majority of our spatial covariates had an at least borderline significant association with the Aedes abundance (i.e. the 95% credible intervals did not/barely overlap zero). In particular, building age was shown to be the strongest predictor. This might be due to a combination of factors, including infrastructural degradation and the water storing practices associated with the sociodemographic profile of residents that result in more instances of water stagnation that can breed mosquitoes. For both species, we estimated that an increase in the managed vegetation cover within the buffer area was associated with a substantial rise in the mean vector abundance, likely owing to the increased availability of water in leaf axils, leaf litter, and discarded receptacles hidden in foliage and tree holes, which supports mosquito breeding. Unlike managed vegetation cover, both forest and grass covers were found to be negatively associated with the abundance of Ae. aegypti. This was not unexpected given that Ae. aegypti prefers highly urbanized areas where it can breed in artificial containers. On the other hand, there was a positive association between forest cover and Ae. albopictus abundance based on the point estimate, which is consistent with the existing knowledge of the vector's ecology [28]. Our analysis showed a borderline significant negative association between drain line density and the abundance of Ae. aegypti in contrast to the positive correlation reported by Seidahmed et al. [29]. This discrepancy can be due to a number of reasons: for example, the study conducted by Seidahmed et al. was restricted to a small area of Singapore, and results may be confounded by the different housing types with different demography [29], whereas this study used 2-year data collected from the same type of housing areas across the island. Moreover, the number of Aedes mosquitoes caught inside the high-rise residential buildings and the number of outdoor breeding habitats can be impacted by drain line density in different ways. Since perimeter drains are known to be the most common breeding habitats of Aedes in Singapore's public areas according to routine inspections [30], their abundance could simultaneously decrease the per-mosquito probability of looking for oviposition sites inside residential buildings and increase the total number of Aedes mosquitoes in the nearby public area.
Thus, our estimate is likely to be a reflection of the resulting net effect and similarly for the estimated effects of other spatial covariates such as nearby managed vegetation cover. Previous work has highlighted the challenges of mapping the spatial distribution of Aedes mosquitoes for operational dengue vector control [31]. In particular, predictors that can be informative across an entire continent or a sufficiently large country may lose predictive power within the confines of a single city [31]. While we did observe substantial evidence of a non-zero association between many spatial covariates and Aedes abundance in our analysis, the estimated standard deviations of the spatial random effects remained large, suggesting substantial unexplained heterogeneity across space. Hence, entomological surveillance remains critical to generating knowledge of Aedes abundance in the field to inform vector control. On the other hand, we found that in most cases the model performed significantly better than the baseline at predicting mosquito abundance at new locations, based on the spatial variables at those locations, suggesting that statistical modeling can still serve as a complementary tool to refine our understanding of the Aedes abundance at locations where entomological data are unavailable and hence to identify additional locations that may require enhanced vector control. It should be noted that the model's improvement in prediction accuracy over the baseline was found to be smaller for Ae. aegypti than Ae. albopictus, despite the former being the most important dengue vector in Singapore. This finding may be explained by the ecology of Ae. aegypti, which is a container breeder that is subject to the vagaries of human behavior; adherence to household practices to prevent breeding is hard to measure and may vary spatially, rendering spatial modeling much more challenging. There is abundant literature on how different weather variables regulate the population dynamics of Aedes via influencing mosquito habitat availability, development, survival, and reproduction [9]. In this study, however, we estimated the effects of all the lagged weather variables on the observed Aedes abundance to be very minimal, which can be owing to the restricted range of the weather variables in the context of Singapore as well as vector control activities that were typically ramped up during higher breeding seasons. The assessment of the out-of-sample forecast errors, too, shows that the additional inclusion of weather covariates did not improve the accuracy of the Aedes abundance forecasts, and a simple model with autoregressive terms alone could yield a modest and statistically significant improvement in the 2- and 3-week-ahead forecasts over the baseline. While these results may suggest that weather had a negligible impact on Aedes abundance in Singapore, it should be noted that this study was conducted in public housing estates and detected mosquitoes that may have hatched nearby, and the relationship between weather and outdoor breeding habitats may be more complex than was identified herein. A longitudinal entomological study in the Geylang neighborhood of Singapore found that the outdoor Aedes population was likely to be shaped by rainfall through a monsoon-driven sequence of flushing, drying and return of breeding habitats [32]. 
Taken together, these results suggest the differential impacts of weather on the Aedes population dynamics and hence the potential risk of exposure to mosquito bites in different settings, i.e. Aedes abundance inside and nearby public housing estates may be less sensitive to changes in weather compared with outdoor abundance. Our results need to be interpreted in the light of the following limitations. First, we were unable to account for the effects of vector control programs on the observed Aedes abundance across space and time. Regulatory inspections and community efforts aimed at removing larval habitats, as well as chemical control to reduce adult mosquito populations, usually peak during higher vector breeding/dengue transmission seasons, and this could to some extent mask the true association between different weather variables and the Aedes abundance, with the confounding bias difficult to quantify or adjust for. Besides, there could be a residual spatial dependence structure in our data due to factors such as potential ongoing expansion of Aedes mosquitoes, which may have been coincidentally absorbed by certain spatial covariates because of confounding. Nonetheless, this issue was assessed via a leave-one-planning-area-out cross-validation framework with baseline predictions created to serve as a benchmark for model performance comparison, and results suggest that the spatial covariates could indeed enhance the out-of-sample predictive accuracy. In addition, our parameter estimates were derived based on mosquito data collected from Singapore's high-rise public residential zones and thus should not be used for extrapolation to low-rise houses. Previous work has suggested that the risks of indoor breeding of Aedes mosquitoes could be highly dependent on the accommodation type in Singapore [29]. In early 2020, the National Environment Agency extended the deployment of Gravitraps to private landed residential estates [33], and as more data are being generated, this will shed further light on how Aedes abundance differs between high- and low-rise residential zones across the island. Our study has demonstrated the potential and challenges of spatio-temporal modeling for improving the understanding of the main dengue vectors' ecology and provided empirical evidence to guide the refinement of vector control strategies in the context of Singapore. Our findings suggest that public residential estates with older buildings and more nearby managed vegetation should be prioritized for vector control inspections and community advocacy to reduce the abundance of Aedes mosquitoes and the risk of dengue transmission. The insights obtained from this study could also be helpful to inspire future studies that attempt to understand the spatio-temporal dynamics of the dengue vector population at a city scale, particularly in settings where entomological surveillance data are less abundant and thus require modeling to further narrow the knowledge gap. The datasets used and analyzed during the current study are available from the corresponding authors upon reasonable request and with the permission of the National Environment Agency or Singapore-ETH Centre. World Health Organization (2009) Dengue: guidelines for diagnosis, treatment, prevention, and control. WHO/HTM/NTD/DEN/20091. Bhatt S, Gething PW, Brady OJ, et al. The global distribution and burden of dengue. Nature. 2013;496:504–7. World Health Organization (2018) Questions and Answers on Dengue Vaccines. In: Immunization, Vaccines Biol. 
https://www.who.int/immunization/research/development/dengue_q_and_a/en/. Accessed 30 Jun 2020. Fitzpatrick C, Haines A, Bangert M, Farlow A, Hemingway J, Velayudhan R. An economic evaluation of vector control in the age of a dengue vaccine. PLoS Negl Trop Dis. 2017;11:1–27. Liew C, Curtis CF, Agency NE, Liew C, Curtis CF. Horizontal and vertical dispersal of dengue vector mosquitoes, Aedes aegypti and Aedes albopictus, in Singapore. Med Vet Entomol. 2004;18:351–60. Brady OJ, Hay SI (2020) The Global Expansion of Dengue: How Aedes aegypti Mosquitoes Enabled the First Pandemic Arbovirus. Annu Rev Entomol 65:annurev-ento-011019-024918. Brady OJ, Golding N, Pigott DM, Kraemer MUG, Messina JP, Reiner Jr RC, Scott TW, Smith DL, Gething PW, Hay SI (2014) Global temperature constraints on Aedes aegypti and Ae. albopictus persistence and competence for dengue virus transmission. Parasit Vectors 7:338. Vanwambeke SO, Somboon P, Harbach RE, Isenstadt M, Lambin EF, Walton C, Butlin RK. Landscape and land cover factors influence the presence of Aedes and Anopheles Larvae. J Med Entomol. 2007;44:133–44. Morin CW, Comrie AC, Ernst K. Climate and Dengue Transmission: evidence and Implications. Environ Health Perspect. 2013;121:1264–72. Kraemer MUG, Sinka ME, Duda KA, et al. The global distribution of the arbovirus vectors Aedes aegypti and Ae. Albopictus. Elife. 2015;4:1–18. Dickens BL, Sun H, Jit M, Cook AR, Carrasco LR (2018) Determining environmental and anthropogenic factors which explain the global distribution of Aedes aegypti and Ae. albopictus. BMJ Glob Heal 3:e000801. Kraemer MUG, Reiner RC, Brady OJ, et al. Past and future spread of the arbovirus vectors Aedes aegypti and Aedes albopictus. Nat Microbiol. 2019;4:854–63. Reiner RC, Stoddard ST, Vazquez-Prokopec GM, et al. Estimating the impact of city-wide Aedes aegypti population control: an observational study in Iquitos Peru. PLoS Negl Trop Dis. 2019;13:e0007255. Lee KS, Lo S, Tan SSY, Chua R, Tan LK, Xu H, Ng LC. Dengue virus surveillance in Singapore reveals high viral diversity through multiple introductions and in situ evolution. Infect Genet Evol. 2012;12:77–85. Tan LK, Low SL, Sun H, et al. Force of infection and true infection rate of dengue in singapore: implications for dengue control and management. Am J Epidemiol. 2019;188:1529–38. Lee C, Vythilingam I, Chong CS, Abdul Razak MA, Tan CH, Liew C, Pok KY, Ng LC. Gravitraps for management of dengue clusters in Singapore. Am J Trop Med Hyg. 2013;88:888–92. Sim S, Ng LC, Lindsay SW, Wilson AL. A greener vision for vector control: the example of the Singapore dengue control programme. PLoS Negl Trop Dis. 2020;14:e0008428. National Envronmental Agency (2019) NEA Urges Continued Vigilance In Fight Against Dengue. https://www.nea.gov.sg/media/news/news/index/nea-urges-continued-vigilance-in-fight-against-dengue-2019. Accessed 14 Oct 2019. Ong J, Chong C-S, Yap G, Lee C, Abdul Razak MA, Chiang S, Ng L-C. Gravitrap Deployment for Adult Aedes aegypti Surveillance and its Impact on Dengue Cases. PLoS Negl Trop Dis. 2020;14:e0008528. Govtech (2020) Data.gov.sg. https://data.gov.sg/. Accessed 30 Jun 2020. Ong J, Liu X, Rajarethinam J, Yap G, Ho D, Ng LC. A novel entomological index, Aedes aegypti Breeding Percentage, reveals the geographical spread of the dengue vector in Singapore and serves as a spatial risk indicator for dengue. Parasit Vectors. 2019;12:17. Richards DR, Tunçer B. Using image recognition to automate assessment of cultural ecosystem services from social media photographs. 
Ecosyst Serv. 2017. https://doi.org/10.1016/J.ECOSER.2017.09.004. OpenStreetMap. https://www.openstreetmap.org. Housing & Development Board Resale Flat Prices. https://services2.hdb.gov.sg/webapp/BB33RTIS/BB33PReslTrans.jsp. Accessed 11 Sep 2017. Brady OJ, Johansson MA, Guerra CA, et al. Modelling adult Aedes aegypti and Aedes albopictus survival at different temperatures in laboratory and field settings. Parasit Vectors. 2013;6:351. Rue H, Martino S, Chopin N. Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations. J R Stat Soc Ser B. 2009;71:319–92. The R-INLA project. www.r-inla.org. World Health Organization Dengue control-The mosquito. https://www.who.int/denguecontrol/mosquito/en/. Accessed 30 Jun 2020. Seidahmed OME, Lu D, Chong CS, Ng LC, Eltahir EAB. Patterns of urban housing shape dengue distribution in singapore at neighborhood and country scales. GeoHealth. 2018;2:54–67. National Environment Agency (2019) Know The Potential Aedes Breeding Sites. https://www.nea.gov.sg/dengue-zika/prevent-aedes-mosquito-breeding/know-the-potential-aedes-breeding-sites. Accessed 30 Jun 2020. Eisen L, Lozano-Fuentes S. Use of Mapping and Spatial and Space-Time Modeling Approaches in Operational Control of Aedes aegypti and Dengue. PLoS Negl Trop Dis. 2009;3:e411. Seidahmed OME, Eltahir EAB (2016) A Sequence of Flushing and Drying of Breeding Habitats of Aedes aegypti (L.) Prior to the Low Dengue Season in Singapore. PLoS Negl Trop Dis 10:e0004842. National Environment Agency (2020) NEA Vox. https://www.nea.gov.sg/media/nea-vox/index/why-is-nea-placing-mosquito-traps-outside-my-house-will-this-increase-my-chances-of-being-bitten-by-mosquitoes#.XPTELvi-F3o. Accessed 30 Jun 2020. We thank the following people who have contributed to this study: Staff of Environmental Public Health Department involved in mosquito surveillance and operations, Dr. Chee-Seng Chong and team for their technical assistance, and Lynette Tay for her assistance in data acquisition. ARC and BLD are supported by the National Medical Research Council through the Singapore Population Health Improvement Centre (NMRC/CG/C026/2017_NUHS). LRC is supported by the CDPHRG grant (CDPHRG14NOV007). The funders had no role in the design of the study, the collection, analysis and interpretation of data, or in writing the manuscript. Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, 12 Science Drive 2, Singapore, 117549, Republic of Singapore Haoyang Sun, Borame L Dickens, Jue Tao Lim & Alex R. Cook Natural Capital Singapore, Singapore-ETH Centre, ETH Zurich, Singapore, Singapore Daniel Richards Environmental Health Institute, National Environment Agency, Singapore, Singapore Janet Ong, Jayanthi Rajarethinam, Joel Aik, Grace Yap & Lee Ching Ng Centre for Climate Research Singapore, Meteorological Service Singapore, National Environment Agency, Singapore, Singapore Muhammad E. E. Hassim Department of Biological Sciences, National University of Singapore, Singapore, Singapore L. Roman Carrasco School of Biological Sciences, Nanyang Technological University, Singapore, Singapore Lee Ching Ng Haoyang Sun Borame L Dickens Janet Ong Jayanthi Rajarethinam Jue Tao Lim Joel Aik Grace Yap Alex R. Cook LRC, ARC, LCN, BLD, and HS conceptualized and designed the study. HS, BLD, DR, JO, JR, and MH contributed to data collection/acquisition and processing. HS carried out the analysis and wrote the first draft of the manuscript. 
BLD and JTL contributed to results visualization. All authors contributed to the interpretation of results and revision of the manuscript and approved it before submission. All authors read and approved the final manuscript.

Correspondence to Haoyang Sun or Alex R. Cook.

The permission to use the entomological data was approved by the National Environment Agency, Singapore.

Additional file 1: Figures.

Sun, H., Dickens, B.L., Richards, D. et al. Spatio-temporal analysis of the main dengue vector populations in Singapore. Parasites Vectors 14, 41 (2021). https://doi.org/10.1186/s13071-020-04554-9

Spatio-temporal modeling
Dipteran vectors and associated diseases
Optics Society (1) Journal of Korean Society of Water and Wastewater (1) Journal of Magnetics (1) Journal of Mushroom (1) Journal of Plant Biology (1) Journal of Soil and Groundwater Environment (1) Title/Summary/Keyword: PVP Search Result 569, Processing Time 0.231 seconds In Vitro Release of Acetaminophen from Mucoadhesive Microsphere Prepared by Poly(acrylic acid)/poly(vinyl pyrrolidone) Interpolymer Complex Chun, Myung-Kwan;Cho, Chong-Su;Choi, Hoo-Kyun Proceedings of the PSK Conference pp.231.1-231 Mucoadhesive microsphere was prepared by interpo]ymer complexation of po]y(acrylic acid) (PAA) with po]y(vinyl pyrrolidone) (PVP) using solvent diffusion method. The loading efficiency of acetaminophen into the microsphere was 91.3 ${\pm}$ 6.5%. The release rate of acetaminophen from the PAA/PVP complex microspheres was slower than that from PVP microspheres at pH 2.0 and 6.8. The dissolution of microspheres made of the complex was significantly slower than those made of PVP due to H-bond between PVP and PAA. As a result, the release rate of acetaminophen from the complex microspheres was slower than that from PVP microspheres. Organic Thin Film Transistors with Cross-linked PVP Gate Dielectrics by Using Photo-initiator and PMF Yun, Ho-Jin;Baek, Kyu-Ha;Park, Kun-Sik;Shin, Hong-Sik;Ham, Yong-Hyun;Lee, Ga-Won;Lee, Ki-Jun;Wang, Jin-Suk;Do, Lee-Mi 한국정보디스플레이학회:학술대회논문집 We have fabricated pentacene based organic thin film transistors (OTFTs) with formulated poly[4-vinylphenol] (PVP) gate dielectrics. The gate dielectrics is composed of PVP, poly[melamine-coformaldehyde] (PMF) and photo-initiator [1-phenyl-2-hydroxy-2-methylpropane-1-one, Darocur1173]. By adding small amount (1 %) of photo-initiator, the cross-linking temperature is lowered to $115^{\circ}C$, which is lower than general thermal curing reaction temperature of cross-linked PVP (> $180^{\circ}C$). The hysteresis and the leakage current of the OTFTs are also decreased by adding the PMF and the photoinitiator in PVP gate dielectrics. Characteristics of Carbon Nano Fluid Added PVP (PVP가 첨가된 탄소나노유체의 특성에 대한 연구) Seo, Hyang-Min;Park, Sung-Seek;Kim, Nam-Jin In this study, the enhancement of the thermal conductivity of water in the presence of multi-walled carbon nanotubes, MWCNT, was investigated. Sodium Dodecyl Sulfate, SDS, and Polyvinylpyrrolidone, PVP, were employed as the dispersant. SDS or PVP was added in pure water. And then, MWCNT of 0.0005, 0.001, 0.002, 0.003, 0.004, 0.005, 0.01, and 0.02 vol% was dispersed respectively. The thermal conductivity and the viscosity were measured with a transient hot-wire instrument built for this study and the DV II+ Pro viscometer. The results showed that PVP had good thermal conductivity at 300 wt% and this was better than that of SDS 100 wt%, also, the viscosity of nano fluid added PVP rapidly increased until 0.02 vol%. Disssolution Characteristics of Phenobarbital and Phenobarbital-PVP Coprecipitate (Phenobarbital 및 Phenobarbital-PVP 공침물(共沈物)의 용출(溶出)에 관한 연구(硏究)) Shin, Sang-Chul;Lee, Min-Hwa;Kim, Shin-Keun Journal of Pharmaceutical Investigation Phenobarbital의 용출속도(溶出速度)를 증가시키기 위하여 PVP와의 공침물(共沈物)을 형성(形成)한 후 일정(一定)한 표면적(表面積)하에서의 용출속도(溶出速度)를 비교검토(比較檢討)하였다. $37^{\circ}C$, 150r.p.m${\times}$에서의 rate constannt of dissolution, k,는 phenobarbital이 $8.75{\times}10^{-6}M/min$, 1 : 2 phenobarbital-PVP coprecipiate는 $5.35{\times}10^{-5}M/min$이었으며, activation energy of dissolution, Ea는 phenobarbital이 약 10,600cal/mole coprecipitate는 약 5,800cal/mol이었다. 
According to an X-ray diffraction study, phenobarbital alone and its physical mixture with PVP showed the crystalline peaks of phenobarbital, whereas no crystalline peaks of phenobarbital could be detected in the coprecipitate with PVP.

Electrical Characteristics of Cu2O-PVP Nanofibers Fabricated by Electrospinning (전기방사법으로 제조된 Cu2O-PVP 나노사의 전기적 특성)
Kwak, Ki-Yeol; Cho, Kyoung-Ah; Yun, Jungg-Won; Kim, Sang-Sig
Journal of the Korean Institute of Electrical and Electronic Material Engineers, https://doi.org/10.4313/JKEM.2009.22.8.650
Hybrid nanofibers made of Cu2O and polyvinyl pyrrolidone were fabricated by electrospinning on glass substrates. The current magnitude of the Cu2O-PVP hybrid nanofibers is 10 times larger than that of pure PVP nanofibers. In addition, Cu2O-PVP nanofibers possess higher sensitivity to air at room temperature than pure PVP nanofibers.

The Thickness Effect on Surface and Electrical Properties of PVP Layer as Insulator Layer of OTFTs (OTFT 소자의 절연층으로써 두께에 따른 PVP 층의 표면 및 전기적 특성)
Seo, Choong-Seok; Park, Yong-Seob; Park, Jae-Wook; Kim, Hyung-Jin; Yun, Deok-Yong; Hong, Byung-You
Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference
In this work, we describe the characterization of PVP films prepared by spin coating and fabricate bottom-gate OTFTs using pentacene as the active layer and polyvinylphenol (PVP) as the gate dielectric on an Au gate electrode. We investigated the surface and electrical properties of the PVP layer using AFM and an MIM structure, and estimated the device properties of the OTFTs, including $I_D-V_D$, $I_D-V_G$, threshold voltage $V_T$, on/off ratio, and field-effect mobility.

Verification of Bonding Force between PVP Dielectric Layer and PDMS for Application of Flexible Capacitive-type Touch Sensor with Large Dynamic Range (넓은 다이내믹 레인지의 유연 촉각센서 적용을 위한 PVP 유전층과 PDMS 접착력 검증)
Won, Dong-Joon; Huh, Myoung; Kim, Joonwon
The Journal of Korea Robotics Society, https://doi.org/10.7746/jkros.2016.11.3.140
In this paper, we fabricate an arrayed-type flexible capacitive touch sensor using liquid metal (LM) droplets (4 mm spatial resolution). A poly-4-vinylphenol (PVP) layer is used as a dielectric layer on an electrode-patterned polyethylene naphthalate (PEN) film. Bonding tests between the hydroxyl groups (-OH) on the PVP film and polydimethylsiloxane (PDMS) were conducted under various O2 plasma treatment conditions. Through these tests, we confirmed that a non-O2-plasma-treated PVP layer and O2-plasma-treated PDMS can form a chemical bond. To measure the dynamic range of the device, one-cell experiments were conducted, confirming that the fabricated device has a large dynamic range (~60 pF).

Electrical Properties of PVP Gate Insulation Film on Polyethersulfone (PES) and Glass Substrates (Polyethersulfone(PES) 및 유리 기판위에 제작된 PVP 게이트 절연막의 전기적 특성)
Shin, Ik-Sup; Gong, Su-Cheol; Lim, Hun-Seoung; Park, Hyung-Ho; Chang, Ho-Jung
Journal of the Microelectronics and Packaging Society
Capacitors with MIM (metal-insulator-metal) structures using PVP gate insulation films were prepared for application to flexible organic thin film transistors (OTFTs). The copolymer organic insulation films were synthesized using PVP (poly-4-vinylphenol) as the solute and PGMEA (propylene glycol monomethyl ether acetate) as the solvent. Cross-linked PVP insulation films were also prepared by addition of poly(melamine-co-formaldehyde) as a thermal hardener.
The leakage current of the cross-linked PVP films was found to be about 1.3 nA on the Al/PES (polyethersulfone) substrate, whereas on the ITO/glass substrate it was about 27.5 nA, indicating improved leakage behavior on the Al/PES substrate. Also, the capacitances of all prepared samples on ITO/glass and Al/PES substrates ranged from 1.0 to 1.2 nF/cm², in close agreement with the calculated capacitance values (see the short note after these abstracts).

Humidity Sensor using Polyvinylpyrrolidone-Coated Mach-Zehnder Interferometer in Planar Lightwave Circuit (폴리비닐피롤리돈이 코팅된 마하젠더 간섭계 기반의 평판형 광도파로 습도센서)
Kim, Ju Ha; Kim, Myoung Jin; Jung, Eun Joo; Hwang, Sung Hwan; Lee, Woo Jin; Choi, Eun Seo; Rho, Byung Sup
Korean Journal of Optics and Photonics, https://doi.org/10.3807/KJOP.2013.24.5.251
In this paper, a humidity sensor implemented with a Mach-Zehnder interferometer (MZI) in a planar lightwave circuit (PLC) is designed and demonstrated. External humidity is detected with polyvinylpyrrolidone (PVP) coated on the etched arm of the MZI. The length of the etched arm is 10 mm, and the PVP was applied by dip-coating into the etched region. As the refractive index of the PVP changes with the surrounding humidity, the PVP-coated sensor shows changes in the interferogram depending on the relative humidity (RH) around the PLC. The measured results show that the proposed humidity sensor works successfully in the range of 30% to 80% RH.

Light Scattering Effect of Incorporated PVP/Ag Nanoparticles on the Performance of Small-Molecule Organic Solar Cells
Heo, Il-Su; Park, Da-Som; Im, Sang-Gyu
Proceedings of the Korean Vacuum Society Conference
Small-molecule organic photovoltaic cells have recently attracted growing attention due to their potential for the low-cost fabrication of flexible and lightweight solar modules. PVP/Ag nanoparticles were synthesized by the reaction of polyvinylpyrrolidone (PVP) and silver nitrate at 150°C. In this reaction, the size of the nanoparticles was controlled by the relative mole fractions of PVP and Ag. The PVP/Ag nanoparticles of various sizes were then spin-coated on the patterned ITO glass prior to the deposition of the PEDOT:PSS hole transport layer. Scattering of the incident light by these incorporated nanoparticles increased the path length of the light through the active layer and hence enhanced the light absorption. This scattering effect increased as the size of the nanoparticles increased, but it was offset by the decrease in total transmittance caused by the non-transparent nanoparticles. As a result, the maximum power conversion efficiency of 0.96%, a 14% enhancement over the cell without nanoparticles, was obtained when the mole fraction of PVP:Ag was 24:1 and the nanoparticle size was 20-40 nm.
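A short note on the MIM capacitance figures quoted in the abstract above (1.0 to 1.2 nF/cm² for cross-linked PVP films): the "calculated capacitance values" referred to there are normally just the parallel-plate estimate. The sketch below shows that estimate; the relative permittivity and film thickness used here are illustrative assumptions and are not taken from the abstracts.

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def mim_capacitance_per_cm2(eps_r, thickness_m):
    # Parallel-plate estimate for a metal-insulator-metal stack: C/A = eps0 * eps_r / d.
    c_per_m2 = EPS0 * eps_r / thickness_m       # F/m^2
    return c_per_m2 * 1e9 / 1e4                 # convert F/m^2 -> nF/cm^2

# Both arguments below are assumptions chosen for illustration only.
print(mim_capacitance_per_cm2(eps_r=3.9, thickness_m=3.0e-6))   # ~1.15 nF/cm^2
```

With a thinner film the estimate scales up proportionally, which is why reported PVP gate-dielectric capacitances vary widely between studies.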
Analytical Science and Technology (분석과학), The Korean Society of Analytical Science (한국분석과학회)
Analytical Science and Technology is devoted to publication of original and significant research in the fundamental theory, practice and application of analytical and bioanalytical science. Contributors from a broad spectrum of research fields such as chemistry, chemical engineering, materials science, pharmaceuticals, agriculture, food and feed, and environmental science are welcomed. http://acoms.kisti.re.kr/journal.do?method=journalintro&journalSeq=J000037&menuId=0200&introMenuId=0101

Antibacterial and phytochemical properties of Aphanamixis polystachya essential oil
Rahman, Md. Shahedur; Ahad, Abir; Saha, Subbroto Kumar; Hong, Jongki; Kim, Ki-Hyun
p. 113, https://doi.org/10.5806/AST.2017.30.3.113
Nowadays, the rise of new antibiotic-resistant bacterial strains is a global threat. Ethnic people of India have been employing Aphanamixis polystachya (Wall.) R. Parker wood extract to heal cancerous wounds. The aim of this study was to evaluate the antimicrobial activity and to identify the medicinally potent chemicals in the essential oil extract of A. polystachya. The antibacterial properties of various organic extracts were evaluated against a range of gram-positive and gram-negative bacteria using the disc diffusion method, and GC-MS-based analysis was used to identify the active oil components. All extracts of A. polystachya leaves showed potential antibacterial activity, notably the ethyl acetate extract, while the petroleum ether extract was highly active against all tested bacteria (zones of inhibition ranging from 8.83 to 11.23 mm). In addition, the petroleum ether extract had the lowest MIC values (32 to 256 μg/mL) against E. coli, S. lutea, X. campestris, and B. subtilis. The major compounds detected in the oil [β-elemene (16.04%), β-eudesmol (12.78%), β-caryophyllene (19.37%), β-selinene (11.32%), elemol (5.76%), and α-humulene (5.68%)] are expected to be responsible for the potent antimicrobial activity. The results of this study offer valuable insights into the potential role of A. polystachya essential oil extract in pharmaceutical and antibiotic research.

Comparative analysis of urinary metabolites in methamphetamine self-administrated rats
Choi, Boyeon; Kim, Soo Phil; Jang, Choon-Gon; Yang, Chae Ha; Lee, Sooyeun
p. 122
Methamphetamine addiction is a critical issue due to the lack of effective pharmacotherapy and the high potential for relapse. Nevertheless, there are no distinct biomarkers for diagnosis or prognosis of methamphetamine addiction. In the present study, a rat model of methamphetamine self-administration was established and the alteration of urinary metabolites by methamphetamine addiction was investigated by targeted metabolite analysis using mass spectrometry. Rat urine samples were collected at three time points (before and after addiction and after extinction) from the methamphetamine-addicted group as well as an age-matched control group. The collected samples were prepared using the AbsoluteIDQ p180 kit and analyzed using flow injection analysis (FIA)- or high performance liquid chromatography (HPLC)-tandem mass spectrometry (MS/MS). The levels of lysine, acetylornithine and methionine sulfoxide were distinctively altered depending on the status of methamphetamine addiction or extinction.
In particular, the level of acetylornithine was reversed from addiction to extinction, for which further studies could be useful for biomarker discovery or mechanistic studies of methamphetamine addiction.

Development of HPLC method for differentiation of three parts of mulberry tree
Eom, Ji Hyun; Vu, Thi Phuong Duyen; Cai, Linxi; Zhao, Yan; Li, Hong Xu; Yang, Seo Young; Kim, Young Ho; Kim, Seok Jin; Cho, Hyun So; Bao, Haiying; Chem, Jianbo; Kim, Kyung Tae; Kang, Jong Seong
p. 130
The leaves (Mori Folium; MF), branches (Mori Ramulus; MR), and root bark (Mori Cortex Radicis; MCR) of the mulberry tree have been used as therapeutic herbs for centuries. Existing analytical methods were developed specifically for different parts of the tree and cannot be applied to samples containing a mixture of tree parts. Such method specialization is time-consuming and requires separate identification and quality control of each tree part. This report describes an HPLC method for the simultaneous quality control and discrimination of MF, MR, and MCR using four marker compounds: rutin, kuwanon G, oxyresveratrol, and morusin. An Optimapak C18 column (4.6 × 250 mm, 5 μm) was used with a gradient elution of 0.1% formic acid in water and acetonitrile. The flow rate was 1.0 mL/min and the detection wavelength was 270 nm. In quantitative analyses of the three parts, rutin (0.11% w/w) was detected only in MF. The oxyresveratrol content (0.12% w/w) was highest in MR. Kuwanon G (0.33% w/w) and morusin (0.18% w/w) were higher in MCR than in the other parts. The HPLC method given herein can be used to simultaneously classify and quantify the three herbal medicines from the mulberry tree.

Effect of microwave irradiation on lipase-catalyzed reactions in ionic liquids
An, Gwangmin; Kim, Young Min; Koo, Yoon-Mo; Ha, Sung Ho
p. 138
Microwave-assisted organic synthesis has gained remarkable interest over the past years because of its advantages: (i) rapid energy transfer and superheating, (ii) higher yield and faster reaction, and (iii) cleaner reactions. Ionic liquids are well known for their unique properties such as negligible vapor pressure and high thermal stability. With these properties, ionic liquids have gained increasing attention as green, multi-use reaction media. Recently, ionic liquids have been applied as reaction media for biocatalysis. Lipase-catalyzed reactions in ionic liquids provide high activity and yield compared to conventional organic solvents or solvent-free systems. Since polar molecules are generally good absorbers of microwave radiation, ionic liquids were investigated as reaction media to improve activity and productivity. In this study, therefore, the effect of microwave irradiation in ionic liquids was investigated for lipase-catalyzed reactions such as benzyl acetate synthesis and caffeic acid phenethyl ester synthesis. Compared to conventional heating, microwave heating showed almost the same final conversion but an increased initial reaction rate (3.03 mM/min versus 2.11 mM/min with conventional heating at 50°C).
Ginsenosides analysis in the crude saponin fraction extracted from Korean red ginseng, and its efficacious analysis against acute pulmonary inflammation in mice
Lee, Seung Min; Lim, Heung Bin
p. 146
In this study, we isolated ginseng crude saponin (GCS) from Korean red ginseng (KRG) and determined its ginsenoside content to investigate the physiological and pathological effects of GCS on acute pulmonary inflammation induced by intratracheal instillation of cigarette smoke condensate (CSC) and lipopolysaccharide (LPS) solution in BALB/c mice. GCS was orally administered at doses of 10 mg/kg and 25 mg/kg for 3 weeks. The recovery rate of GCS from KRG was 6.5%, total ginsenosides comprised 1.13% of GCS, and Rb1 had the highest content among them. Total inflammatory cells in the lung homogenates and bronchoalveolar lavage fluid (BALF) increased following intratracheal administration of CSC and LPS. However, GCS administration attenuated this increase. Furthermore, it inhibited the increase in leukocytes in the blood, considerably decreased neutrophils in BALF, and reduced infiltration of inflammatory cells and deposition of collagen in the tracheal and alveolar tissue. In this study, GCS was found to have a protective effect against acute pulmonary inflammation, and it may be beneficial in preventing various respiratory diseases.

Development of HPLC assay method of fusidate sodium tablets
Lee, GaJin; Choi, Min; Truong, Quoc-Ky; Mai, Xuan-Lan; Kang, Jong-Seong; Woo, Mi Hee; Na, Dong-Hee; Chun, In-Koo; Kim, Kyeong Ho
p. 154
The Korean Pharmacopoeia (KP XI), British Pharmacopoeia (BP 2013) and Japanese Pharmacopoeia contain monographs for the quality control of raw fusidate sodium and its formulations using high performance liquid chromatography (HPLC). However, the assay method for the determination of fusidate sodium in commercial tablets is titration, which is less specific than HPLC. In this study, we present an alternative HPLC method for quantitation of fusidate sodium in tablets. Method validation was performed to determine linearity, precision, accuracy, system suitability, and robustness. The linearity of calibration curves in the desired concentration range was high (r² = 0.9999), while the RSDs for intra- and inter-day precision were 0.25-0.37% and 0.11-0.60%, respectively. Accuracies ranged from 99.46% to 100.85%. Since the system suitability, intermediate precision and robustness of the assay were satisfactory, this method will be a valuable addition to the Korean Pharmacopoeia (KP XI).
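For readers unfamiliar with how precision and accuracy figures like those in the validation report above are obtained, the snippet below shows the usual calculation on made-up replicate data; the numbers are purely illustrative and are not from the paper.

```python
import statistics as st

# How %RSD (precision) and accuracy are typically computed in method validation.
nominal = 100.0                                         # nominal concentration, e.g. ug/mL (assumed)
measured = [99.8, 100.6, 99.2, 100.9, 100.1, 99.6]      # back-calculated replicate results (made up)

mean = st.mean(measured)
rsd = 100.0 * st.stdev(measured) / mean                 # relative standard deviation, %
accuracy = 100.0 * mean / nominal                       # mean recovery relative to nominal, %
print(f"%RSD = {rsd:.2f}, accuracy = {accuracy:.2f}%")
```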
Conflicting definitions for the distribution of normals $D$ in microfacet BSDFs

Please do not confuse this question with this one. In Understanding the Masking-Shadowing Function in Microfacet-Based BRDFs, Eric Heitz defines the distribution of normals as [equation screenshot from the paper omitted]. There, the footnote is [footnote screenshot omitted]. Even the footnote description is confusing: it says "per square meter of the geometric surface", and yet that quantity ends up in the numerator. Everyone else, before and after this well-cited paper, maintains that the distribution is $1/\mathbf{sr}$, which creates different issues with their math, resolved through sketchy corrective expressions and language, with some avoiding the subject through intentionally unclear language (the PBR book).

I can buy the argument and the definition, but the author hand-waves the notion of said area everywhere, even defining the surface as being 1 meter squared to justify relegating it to a cosine normalization factor. In no capacity does $dA$ come up, which implicitly makes all the equations result in radiant intensity, not radiance. (I know the author says it's projected area, but a cosine alone is not projected area; making the area implicit is a cop-out given the lack of an explicit area to cancel out, since it's supposedly encoded by $D$.)

On the third side of the fence, Matt Pharr can't decide which side he's on, adding a murky $dA$ in the derivation of the Torrance-Sparrow microfacet model only to cancel it out. He also suggests one needs to use the projected solid angle while proving normalization, while nobody else does. Who is right? I am beyond confused and frustrated at this point.

mathematics brdf pbr microfacet pbrt
GroundGlassUnknown

Does this article help? reedbeta.com/blog/hows-the-ndf-really-defined – Nathan Reed
While I understand your frustration to some extent, you cannot simply assume bad faith from these well-respected authors just because you do not understand something. As is, your post is demeaning and does not encourage us to solve your problem at all. Please consider rewording your question to remain objective. – Hubble
I hope the link that Nathan Reed gave you could solve your problem. As @Hubble already said, your words are a bit harsh and you should rephrase your question to be more neutral towards the author of the book. – wychmaster ♦

The units of the NDF are tricky. For whatever it's worth, Heitz's convention of defining it relative to a 1 m² reference geometric surface is unusual, and although I can see why he would want to define it that way for conceptual simplicity, it does not really match how NDFs are used in practice. I definitely had a head-tilt moment when I first read that passage in his paper. The problem with NDFs is that they involve two different area measures: a macrosurface (or "geometric") area and a microsurface area.
As far as I understand, the true and complete definition of the units of the NDF is:
$$ \frac{\text{micro area}}{\text{solid angle} \cdot \text{macro area}} $$
In other words, if we double-integrate the NDF over a region of solid angle $\Omega$ and a region of macrosurface $A$, we obtain the area of the microsurface that has normals within that region of solid angle:
$$ \int_\Omega \int_A D(\omega)\,dA\,d\omega = \text{total area of microfacets above $A$ with normals in $\Omega$} $$
Note that this is dimensionally consistent, even maintaining the distinction between the two area measures. The $dA$ is an element of macro area, and $d\omega$ an element of solid angle, so those cancel out the denominator of $D(\omega)$. The numerator is then micro area, which matches the right hand side.

Now if we forget the distinction between these two area measures and cancel the area units against each other, we obtain just $1/\text{sr}$, which is consistent with the vast majority of literature. But there is a subtlety here. Usually, when we see a distribution that has the units of $1/\text{sr}$, this means it's a distribution over the sphere and that if you integrate it over the sphere you should get 1. But note that if you integrate the NDF over the sphere, you don't in general get 1. Instead, as the above definition shows, you get the overall ratio of micro area to macro area. For a rough surface, this is greater than 1 (and the rougher it is, the larger the value), as the microsurface is crinkly and rough and therefore has more area than the corresponding macrosurface. The correct normalization condition for an NDF is
$$ \int_{S^2} D(\omega) (n \cdot \omega) \, d\omega = 1 $$
(as can be readily verified if you take any of the usual NDF formulas, set a nonzero roughness, and try numerically integrating it—they don't come out to 1 unless you use the above integrand). The reason for the $(n \cdot \omega)$ factor in the integrand is to convert microsurface area into the corresponding (projected) macrosurface area. In other words, this factor has the units:
$$ n \cdot \omega = \frac{\text{macro area}}{\text{micro area}} $$
With this, the area units of the NDF really do cancel properly and you're left with a true spherical distribution that integrates to 1.

Now why does Heitz say something different? He wants to eliminate the "macro area" factor by setting it to a standardized value of 1 m², then defining the NDF as simply $\text{micro area}/\text{solid angle}$. Thus its units become $\text{m}^2 / \text{sr}$, and the normalization condition becomes
$$ \int_{S^2} D(\omega) (n \cdot \omega) \, d\omega = 1\ \text{m}^2 $$
(cf. equation (9) in the paper). It's no longer a dimensionless scalar 1, but an area of 1 square meter.

I don't love this definition. On the one hand, it does make it really explicit that an NDF involves area "somehow", and isn't simply a spherical distribution. It avoids the somewhat bizarre situation of having two area measures whose units cancel, but whose values don't cancel. On the other hand, in real life we obviously don't apply NDFs only to flat surfaces of 1 m². We want to apply NDFs to surface patches of any size we like—such as the patch enclosed in a pixel when rendering. The obvious way to do that is by normalizing out the 1 m², so that the NDF is expressed as a fractional area relative to any desired patch size. And then you get back to the definition I stated at the beginning of this answer.

Indeed, even Heitz isn't really consistent with this choice, as you noted.
Starting from equation (14) in the paper, he drops the implicit 1 m² factors that ought to be there. When he proceeds to define the distribution of visible normals, $D_{\omega_o}(\omega_m)$, he no longer says that the units are $\text{m}^2/\text{sr}$, but just the usual $1/\text{sr}$ (cf. Table 2 in the paper). To be fair, Heitz's distribution of visible normals is defined in such a way that it is a legitimate spherical distribution by itself, with the usual normalization condition:
$$ \int_{S^2} D_{\omega_o}(\omega_m) \, d\omega_m = 1 $$
(cf. equation (18) in the paper), so it is no longer necessary to think about some hidden area units in order to understand the meaning of this function. Arguably BRDFs and NDFs should have been defined this way as well all along, incorporating the $(n \cdot \omega)$ factor into the function, so that they would be legitimate spherical distributions and we could avoid all this hassle. Sadly, due to historical path-dependence we didn't end up there.

So, to sum up, the oddities of the NDF have their origin in the (IMO under-appreciated and under-explained) fact that it has two different area measures hiding in it. If you just look at the units, they appear to cancel out, but nevertheless you need to keep the areas in mind in order to understand NDFs properly.

Nathan Reed
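As an editorial aside, the normalization claim in the answer is easy to check numerically. The sketch below uses the common Trowbridge-Reitz (GGX) form of $D$ with an arbitrary roughness (both choices are assumptions for illustration): integrating $D(\omega)\,(n\cdot\omega)$ over the hemisphere returns 1, while integrating $D(\omega)$ alone returns the micro-to-macro area ratio, which is greater than 1 for a rough surface.

```python
import numpy as np

def ggx_ndf(cos_theta, alpha):
    # Trowbridge-Reitz (GGX) distribution of normals D(m), parameterized by
    # cos(theta_m) = dot(n, m) and the roughness alpha.
    c2 = cos_theta ** 2
    return alpha ** 2 / (np.pi * (c2 * (alpha ** 2 - 1.0) + 1.0) ** 2)

def hemisphere_integral(alpha, weight):
    # Midpoint-rule integration over the upper hemisphere:
    # d_omega = sin(theta) d_theta d_phi; the phi integral contributes 2*pi.
    n = 200_000
    dt = 0.5 * np.pi / n
    theta = (np.arange(n) + 0.5) * dt
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    return 2.0 * np.pi * np.sum(ggx_ndf(cos_t, alpha) * weight(cos_t) * sin_t) * dt

alpha = 0.5  # arbitrary, fairly rough
print(hemisphere_integral(alpha, lambda c: c))    # ~1.0 : integral of D(m) (n.m) d_omega
print(hemisphere_integral(alpha, lambda c: 1.0))  # >1.0 : integral of D(m) d_omega alone
```

The second value grows as alpha increases, consistent with the "crinklier microsurface has more area" picture described above.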
A New Explicit Immersed Boundary Method for Simulation of Fluid-Solid Interactions
B. Harikrishnan, Zhen Chen & Chang Shu
10.4208/aamm.OA-2020-0106, Adv. Appl. Math. Mech., 13 (2021), pp. 261-284
A new explicit immersed boundary method (IBM) is presented in this work by analyzing and simplifying the system of equations developed from the implicit boundary-condition-enforced immersed boundary method. In this way, the requirement to solve the matrix system has been bypassed, which makes the solver computationally less expensive, especially when a large number of Lagrangian points are used to represent the solid boundary. The lattice Boltzmann flux solver (LBFS) was chosen as the flow solver in this paper as it combines the advantages of both the lattice Boltzmann (LB) solver and the Navier-Stokes solver. However, it should be noted that the new IBM can be incorporated into any flow solver. Comprehensive validations demonstrate that the new explicit scheme has numerical accuracy comparable to the previous implicit IBM for geometries with curvature. The new method is computationally much more efficient than the previous method, especially for moving boundary problems.

A Linearized Difference Scheme for Time-Fractional Sine-Gordon Equation
Zhiyong Xing & Liping Wen
In this paper, a linearized difference scheme is proposed for the Sine-Gordon equation (SGE) with a Caputo time derivative of order $\alpha\in(1,2)$. Compared with the existing linearized difference schemes, the proposed numerical scheme is simpler and easier for theoretical analysis. The solvability, boundedness and convergence of the difference scheme are rigorously established in the $L_{\infty}$ norm. Finally, several numerical experiments are provided to support the theoretical results.

A Kernel-Independent Treecode for General Rotne-Prager-Yamakawa Tensor
A particle-cluster treecode based on barycentric Lagrange interpolation is presented for fast summation of hydrodynamic interactions through the general Rotne-Prager-Yamakawa tensor in 3D. The interpolation nodes are taken to be Chebyshev points of the 2nd kind in each cluster. The barycentric Lagrange interpolation is scale-invariant, which promotes the treecode's efficiency (see the short interpolation sketch after this group of abstracts). Numerical results show that the treecode CPU time scales like $\mathcal{O}(N \log N)$, where $N$ is the number of beads in the system. The kernel-independent treecode is a relatively simple algorithm with low memory consumption, and this enables a straightforward OpenMP parallelization.

Unconditional Stability and Error Estimates of the Modified Characteristics FEM for the Time-Dependent Viscoelastic Oldroyd Flows
Yang Yang, Yanfang Lei & Zhiyong Si
In this paper, our purpose is to study the unconditional stability and convergence of the characteristics finite element method (FEM) for the time-dependent viscoelastic Oldroyd fluid motion equations. We deduce optimal error estimates in the $L^2$ and $H^1$ norms. The analysis is based on an iterated time-discrete system, with which the error function is split into a temporal error and a spatial error. Finally, numerical results confirm the theoretical predictions.
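As referenced in the treecode abstract above, here is a minimal sketch of barycentric Lagrange interpolation at Chebyshev points of the 2nd kind. It is a generic 1D illustration only, not the paper's treecode, and the test function is an arbitrary choice.

```python
import numpy as np

def cheb2_nodes_weights(n):
    # Chebyshev points of the 2nd kind on [-1, 1] and their barycentric weights:
    # w_j = (-1)^j, halved at the two endpoints (Berrut & Trefethen).
    j = np.arange(n + 1)
    x = np.cos(np.pi * j / n)
    w = (-1.0) ** j
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def bary_eval(x_nodes, w, f_vals, x_eval):
    # Second (true) barycentric form; invariant under scaling of the nodes.
    num = np.zeros_like(x_eval)
    den = np.zeros_like(x_eval)
    exact = np.full(x_eval.shape, np.nan)
    for xj, wj, fj in zip(x_nodes, w, f_vals):
        diff = x_eval - xj
        hit = diff == 0.0
        exact[hit] = fj                      # evaluation point coincides with a node
        c = wj / np.where(hit, 1.0, diff)    # avoid division by zero at coincident points
        num += c * fj
        den += c
    out = num / den
    mask = ~np.isnan(exact)
    out[mask] = exact[mask]
    return out

x, w = cheb2_nodes_weights(20)
f = np.exp(x) * np.sin(5.0 * x)              # arbitrary smooth test function
xe = np.linspace(-1.0, 1.0, 1000)
print(np.max(np.abs(bary_eval(x, w, f, xe) - np.exp(xe) * np.sin(5.0 * xe))))
```

The maximum error printed at the end is tiny for this smooth function, which is the property a particle-cluster treecode exploits when it replaces far-field kernel evaluations by interpolation on each cluster.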
Temperature Effect on the Fundamental Breakdown Mechanism of Mack Mode Disturbances in Hypersonic Boundary Layers
Jiakuan Xu & Jianxin Liu
In hypersonic boundary layers, the Mack mode is the most unstable mode, and its secondary instability is a central topic in laminar-turbulent transition research. Understanding the mechanism of a secondary instability is very important for delaying or promoting turbulence generation. In this paper, we focus on the main routes of secondary instability to turbulence in hypersonic flows, including fundamental breakdown and subharmonic breakdown, especially the former. Through linear and non-linear stability analyses and secondary instability analysis at various flow temperature conditions, we seek the temperature effect on the secondary instability mechanism of Mack mode disturbances. The results point out that the fundamental mode always dominates the breakdown type when the saturated amplitude of the primary Mack mode is large enough. As the stagnation temperature increases, the maximum growth rates of the fundamental mode and subharmonic mode both increase. Likewise, when the wall is cooled, the maximum growth rates of the fundamental and subharmonic modes are both enlarged. In contrast, with a heated wall, the maximum growth rates of the secondary instability both decrease.

A Highly Efficient Reduced-Order Extrapolating Model for the 2D Viscoelastic Wave Equation
Fei Teng & Zhendong Luo
We mainly study the reduced-order modeling of the classical natural boundary element (CNBE) method for the two-dimensional (2D) viscoelastic wave equation by means of the proper orthogonal decomposition (POD) technique. For this purpose, we first establish the CNBE model and analyze the existence, stability, and errors of the CNBE solutions. We then build a highly efficient reduced-order extrapolating natural boundary element (HEROENBE) model, containing few degrees of freedom but possessing sufficiently high accuracy, for the 2D viscoelastic wave equation by the POD method, and analyze the existence, stability, and errors of the HEROENBE solutions by the CNBE method. We finally employ some numerical experiments to verify that the numerical results accord with the theoretical ones, so that the validity of the HEROENBE model is further verified.

Adaptive Relaxation Strategy on Basic Iterative Methods for Solving Linear Systems with Single and Multiple Right-Hand Sides
Yuan Yuan, Shuli Sun, Pu Chen & Mingwu Yuan
Two adaptive techniques for choosing the relaxation factor, namely Minimal Residual Relaxation (MRR) and Orthogonal Projection Relaxation (OPR), are proposed for basic iterative methods for solving linear systems. Unlike classic relaxation, in which the optimal relaxation factor is generally difficult to find, in the proposed techniques a non-stationary relaxation factor based on the minimal residual or orthogonal projection method is calculated adaptively in each relaxation step, at acceptable cost, for Jacobi, Gauss-Seidel or symmetric Gauss-Seidel iterative methods. In order to avoid the "stagnation" of successive locally optimal relaxations, a recipe of inserting several basic iterations between every two adjacent relaxations is suggested, and the resulting MRR$(m)$/OPR$(m)$ strategy is more stable and efficient (here $m$ denotes the number of basic iterations inserted). To solve linear systems with multiple right-hand sides efficiently, block-form relaxation strategies are proposed based on MRR$(m)$ and OPR$(m)$.
Numerical experiments show that the presented MRR$(m)$/OPR$(m)$ algorithm is more robust and effective than classic relaxation methods. It is also shown that the proposed block relaxation strategies can efficiently accelerate the solution of systems with multiple right-hand sides in terms of total solution time as well as the number of iterations.

An Implicit-Explicit Scheme for the Radiation Hydrodynamics
Yaoli Fang
In this paper, we study an implicit-explicit scheme for the radiation hydrodynamics in the equilibrium diffusion limit and in the grey nonequilibrium diffusion limit. We extend a popular Godunov-type method, the MUSCL-Hancock scheme, to the convective part of the radiation hydrodynamics, while a cell-centered finite volume scheme is used for the radiative heat transfer. Moreover, the implicit-explicit scheme is easier to implement. Numerical simulations show the character of the radiative shock wave and the accuracy of the scheme.

An Improved Single-Relaxation-Time Multiphase Lattice Boltzmann Model for Multiphase Flows with Large Density Ratios and High Reynolds Numbers
Qiaozhong Li, Xiaodong Niu, Zhiliang Lu, You Li, Adnan Khan & Zishu Yu
In this study, an improved single-relaxation-time multiphase lattice Boltzmann method (SRT-MLBM) is developed for simulating multiphase flows with both large density ratios and high Reynolds numbers. This model employs two distribution functions in the lattice Boltzmann equation (LBE), one tracking the interface between different fluids and the other calculating hydrodynamic properties. In the interface distribution function, a time derivative term is introduced to recover the Cahn-Hilliard equation. For the flow field, a modified equilibrium particle distribution function is presented to evolve the velocity and pressure fields. The present method keeps the simplicity of the conventional SRT-MLBM but enjoys good stability in simulating multiphase flows. Apart from several benchmarks, the present model is validated by simulating various challenging multiphase flows, including two droplets impacting on a liquid film, droplet oblique splashing on a thin film, and a drop impacting on a moving liquid film. Numerical results show the reliability of the present model for effectively simulating complex multiphase flows at density ratios of 1000 and high Reynolds numbers (up to 7000).

Upwind Strategy for Localized Method of Approximate Particular Solutions with Applications to Convection Dominated Diffusion Problems
Xueying Zhang & Luyu Ran
A novel upwind technique for the localized method of approximate particular solutions (LMAPS) is proposed to solve convection-diffusion equations. An upwind approximation to the convective terms is implemented by choosing upwind interpolation stencils, while central interpolation stencils are used for the diffusive terms. The proposed upwind LMAPS scheme is also compared with the conventional LMAPS without the upwind technique, demonstrating its superiority in generating more accurate solutions. Numerical results show that the proposed upwind LMAPS has high accuracy and efficiency for a variety of convection-diffusion equations.

Effect of Leading-Edge Curvature on Receptivity of Stationary Cross-Flow Modes in Swept-Plate Boundary Layers
Luyu Shen & Changgen Lu
Receptivity, the initial stage of transition, is the key to implement the prediction and control of the transition process. Former experimental results showed that, under a relatively low-level turbulence, the three-dimensional boundary-layer transition is mainly induced by the stationary cross-flow modes rather than the travelling ones. Near the leading edge, stationary cross-flow modes can be excited by three-dimensional localized roughness. And the receptive process is affected by both the size of roughness and the shape of leading edge which distorts the mean flow over the plate. Therefore, we perform direct numerical simulation to investigate the excitation of stationary cross-flow modes by three-dimensional localized wall roughness in swept-plate boundary layers with various elliptic leading edges. The effect of the leading-edge curvature on the induced stationary cross-flow modes is revealed. And the relations of the leading-edge curvatures with the amplitudes and dispersion relations of the stationary cross-flow modes are determined. Furthermore, the correlations between the receptivity coefficients and the geometries, locations and numbers of the roughness respectively are analyzed in different leading-edge curvatures. This research aims to complement the study of receptivity in the cross-flow boundary layer. Modified Two-Grid Algorithm for Nonlinear Power-Law Conductivity in Maxwell's Problems with High Accuracy Changhui Yao & Yanfei Li In this paper, we develop the superconvergence analysis of two-grid algorithm by Crank-Nicolson finite element discrete scheme with the lowest Nédélec element for nonlinear power-law conductivity in Maxwell's problems. Our main contribution will have two parts. On the one hand, in order to overcome the difficulty of misconvergence of classical two-grid method by the lowest Nédélec element, we employ the Newton-type Taylor expansion at the superconvergent solutions for the nonlinear terms on coarse mesh, which is different from the numerical solution on the coarse mesh classically. On the other hand, we push the two-grid solution to high accuracy by the postprocessing interpolation technique. Such a design can improve the computational accuracy in space and decrease time consumption simultaneously. Based on this design, we can obtain the convergent rate $\mathcal{O}(\Delta t^2+h^2+H^{\frac{5}{2}})$ in three-dimension space, which means that the space mesh size satisfies $h=\mathcal{O}(H^\frac{5}{4})$. We also present two examples to verify our theorem.
PHYSIOLOGY AND BIOTECHNOLOGY

The Bacterivorous Soil Flagellate Heteromita globosa Reduces Bacterial Clogging under Denitrifying Conditions in Sand-Filled Aquifer Columns
Richard G. Mattison, Hironori Taki, Shigeaki Harayama
Marine Biotechnology Institute Co., Ltd., Kamaishi Laboratories, Kamaishi City, Iwate 026-0001, Japan
For correspondence: [email protected]
DOI: 10.1128/AEM.68.9.4539-4545.2002

An exopolymer (slime)-producing soil bacterium Pseudomonas sp. (strain PS+) rapidly clogged sand-filled columns supplied with air-saturated artificial groundwater containing glucose (500 mg liter−1) as a sole carbon source and nitrate (300 mg liter−1) as an alternative electron acceptor. After 80 days of operation under denitrifying conditions, the effective porosity and saturated hydraulic conductivity (permeability) of sand in these columns had fallen by 2.5- and 26-fold, respectively. Bacterial biofilms appeared to induce clogging by occluding pore spaces with secreted exopolymer, although there may also have been a contribution from biogas generated during denitrification. The bacterivorous soil flagellate Heteromita globosa minimized reductions in effective porosity (1.6-fold) and permeability (13-fold), presumably due to grazing control of biofilms. Grazing may have limited growth of bacterial biomass and hence the rate of exopolymer and biogas secretion into pore spaces. Evidence for reduction in biogas production is suggested by increased nitrite efflux from columns containing flagellates, without a concomitant increase in nitrate consumption. There was no evidence that flagellates could improve flow conditions if added once clogging had occurred (60 days). Presumably, bacterial biofilms and their secretions were well established at that time. Nevertheless, this study provides evidence that bacterivorous flagellates may play a positive role in maintaining permeability in aquifers undergoing remediation treatments.

Bacterial growth in natural porous media frequently leads to clogging through a combination of factors involving biomass accumulation, exopolymeric slime secretion, and insoluble biogas formation. Many operations, including wastewater disposal, microbe-enhanced oil recovery, groundwater recharge, and in situ bioremediation, are variously affected by this process (for a review, see reference 2). Since groundwater aquifers provide a significant proportion of the world population with a potable supply of water, their contamination with organic pollutants poses a serious risk both to health and the environment. At present, in situ bioremediation is considered to be the most cost-effective and least invasive strategy available to remediate an organically contaminated aquifer (12). This approach relies on maintaining good hydraulic conductivity (permeability) in the saturated subsurface to permit adequate groundwater flow through the affected area. Nutrients and/or oxidants may then be injected upstream of the contaminant to biostimulate the biodegradative abilities of the resident microorganisms. However, these water injection wells are frequent foci for partial clog formation in the subsurface (17). Consequently, bacterial clogging may have an adverse effect on the rate and extent of in situ bioremediation owing to reduced permeability in the aquifer.
Bacterivorous protozoa are primary grazers on bacteria in numerous environments, and comparatively large populations coexist with bacteria in variously contaminated aquifers (8, 14, 25, 36). The impact of protozoa on in situ bioremediation is presently unknown but may be influenced by their ability to selectively graze on and control the biomass of the bacterial community (16, 24). Grazing protozoa are known to remineralize growth-limiting nutrients (for a review, see reference 21), which may directly stimulate bacterial metabolism and hence biodegradation. Furthermore, it is possible that grazing protozoa may indirectly stimulate the rate of in situ bioremediation by controlling bacterial clogging and therefore improving permeability (25). Soil acanthamoebae have already been shown to have a positive short-term effect on permeability in laboratory sand-filled columns undergoing bacterial clogging (5). Alternatively, it has been suggested that grazing protozoa may exert an adverse effect in contaminated aquifers by critically reducing the biomass of bacteria available for biodegradation (13). The purpose of this study was to investigate the impact of grazing by the common soil flagellate Heteromita globosa on the development and hydraulic properties of a clog formed during rapid growth of a slime-secreting Pseudomonas sp. (strain PS+). A model was developed in which small sand-filled columns inoculated with these organisms were perfused with an artificial groundwater medium (AGW) containing glucose as a sole carbon source and nitrate as an oxidant (C/N ratio = 4.0). Columns were allowed to develop denitrifying redox conditions typical of many organically contaminated aquifers undergoing remediation. It is anticipated that the findings could provide an impetus to further studies addressing interactions between bacteria and protozoa in contaminated aquifers and perhaps information to assist in the enhancement of existing strategies for in situ bioremediation. Organisms and media.Pseudomonas sp. strain PS+ was isolated from a diesel-contaminated aquifer near Studen, Switzerland. This strain was selected for use based on its abilities to produce exopolymeric slime, perform denitrification, and provide a food source for soil flagellates. Strain PS+ was deposited with the German Collection of Microorganisms and Cell Cultures (Braunschweig, Germany) (DSM 12877). The soil flagellate H. globosa was isolated from a petroleum hydrocarbon-contaminated aquifer at an abandoned oil refinery near Hünxe, Germany (36). An enrichment culture was prepared by suspending sediment in soil extract-salts medium (19) with strain PS+ at 25°C. Single flagellates were isolated from dilutions of the culture using a micromanipulator and refed with strain PS+. The isolation procedure was repeated (three times) to ensure purity of the isolate. H. globosa was deposited with the American Type Culture Collection (Manassas, Va.) (ATCC 50780). Sand-filled columns were perfused with AGW containing (in milligrams liter−1): NaHCO3 (500), NaNO3 (412), CaCl2·2H2O (40), MgSO4·7H2O (30), KH2PO4 (15), H3BO3 (2.86), MnCl2·4H2O (1.81), CoCl2·6H2O (0.04), CuSO4·5H2O (0.04), Na2MoO4·2H2O (0.03), and ZnCl2 (0.02), and 1 N HCl (1.25 ml liter−1). After autoclaving, CaCl2·2H2O and KH2PO4 were added separately through a sterile luer lock filter (0.2-μm pore size; Sartorius AG, Göttingen, Germany) and the pH was readjusted to 7.4. 
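As a quick consistency check on the AGW recipe (not from the paper; simple molar-mass arithmetic), the NaNO3 dose corresponds to the nitrate level quoted in the abstract:

```python
# Nitrate delivered by 412 mg/L NaNO3 in the AGW recipe.
# Molar masses (g/mol) are standard values, not taken from the paper.
M_NaNO3 = 22.99 + 14.01 + 3 * 16.00   # ~85.0
M_NO3   = 14.01 + 3 * 16.00           # ~62.0

NaNO3_mg_per_L = 412.0
nitrate_mg_per_L   = NaNO3_mg_per_L * M_NO3 / M_NaNO3
nitrate_N_mg_per_L = NaNO3_mg_per_L * 14.01 / M_NaNO3

print(f"NO3-  : {nitrate_mg_per_L:.0f} mg/L")    # ~300 mg/L, consistent with the abstract
print(f"NO3-N : {nitrate_N_mg_per_L:.0f} mg/L")  # ~68 mg/L as nitrogen
```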
Glucose (500 mg liter−1) was added to the medium as a sole carbon source (C/N ratio = 4.0) on which bacteria but not flagellates were able to grow. All chemicals were of the highest purity (>99%) and were supplied by Wako Pure Chemical Industries Ltd., Tokyo, Japan, unless otherwise stated. All aqueous solutions were made using MilliQ pure water (Millipore, Tokyo, Japan) unless otherwise stated. Column apparatus.The arrangement used is shown in Fig. 1. All construction materials were supplied by Iuchi (Tokyo, Japan) unless otherwise stated. Steel columns (height, 10 cm; inside diameter, 4 cm) were each perforated with three side ports through which stainless steel needles (gauge 16) were inserted and secured with adjustable compression fittings (Naruse, Tokyo, Japan). Needles with multiple perforations were used to ensure efficient water and pressure sampling across each column. Columns were sealed at both ends using rubber stoppers having nonperforated needles preinserted to form top and base ports. Twelve columns were mounted upright and secured between two circular plates forming a torus capable of rotating on a spindle around a fixed central axis. Sterile AGW was supplied to the base port, and sterile AGW containing glucose was supplied to one side port of each column from separate reservoir canisters (20 liters) via microtube peristaltic pumps (Eyela, Tokyo, Japan). Negative control columns were supplied with sterile AGW from a separate reservoir canister to minimize risk from contamination. Some 24 flow lines supplying 12 columns were each connected to separate pump channels fitted with Tygon tubing (inside diameter, 1.59 mm). All connections to columns and pumps were made using PTFE tubing (inside diameter, 1.59 mm) and assorted polypropylene luer lock fittings, unless otherwise stated. Sterile luer lock filters (0.2-μm pore size) were inserted between column inlet ports and flow lines in order to minimize bacterial growth and contamination of reservoir canisters upstream. All filters were replaced regularly, and flow lines were flushed with hydrogen peroxide (6%) to sterilize. Effluent emerging from the upper port of each column could be sampled or diverted to a waste canister via a three-way luer-fitting stopcock (Sigma, Tokyo, Japan) attached to the stopper needle. The two remaining side ports on each column (located on either side of the nutrient inlet) were each connected via a three-way luer-fitting stopcock to glass tubes (length, 100 cm; inside diameter, 4.0 cm) using low-gas-permeability Viton tubing. Glass tubes functioned as piezometers for measuring hydraulic head and were oriented vertically and secured to the spindle adjacent to a ruled scale. Stopcocks were opened to allow piezometers to equilibrate several hours prior to reading. Fine quartz sand with a grain size of 80 to 150 μm (Koso Chemicals, Tokyo, Japan) was treated with sodium acetate (1 M, pH 5.0) and hydrogen peroxide (6%) to remove adsorbed carbonates and organics, respectively. Treated sand was rinsed thoroughly with deionized water and oven dried. Sand was then autoclaved and redried before use. Columns were packed with dry sand (120 g) using an unaligned steel mesh and mild sonication (100 W) to randomly disperse grains and were then flushed with CO2 and sealed. Displacing resident air with more soluble CO2 minimized the initial impact of gas bubbles on the effective porosity (θm) of the sand. 
Columns were resterilized in situ by perfusion with sodium azide (0.05%, 8 h) and were then flushed with AGW prior to inoculation with organisms. Packed sand was estimated to have a pore diameter range of 12 to 23 μm based on the assumption of cylindrical pore spaces. Photograph showing sand-filled column apparatus. Steel columns (Cs) were oriented vertically around a central axis and secured between two plates (Pw). AGW was supplied through the base (not shown) and side port (Ps) on each column, while effluent was diverted to a waste pipe (Wp) or sampled via a stopcock (Se) mounted on the upper port. Luer lock fittings (Fl) for the inline attachment of filters with side ports (Ps) are also shown. Saturated hydraulic conductivity (sand permeability) was measured as a function of water height in glass piezometer tubes (Tp) that were operated through stopcocks connected to side ports (Pp). Scale bar = 5 cm. Experimental design.Columns were divided into four groups (A to D) with three replicates in each. Groups A and C were designated positive controls for clogging and were inoculated with strain PS+ at 0 days. Group B replicates were designated treatments and were inoculated with strain PS+ and H. globosa at 0 days. This group was used to determine the impact of grazing flagellates on clogging. Group D replicates were designated abiotic negative controls and were neither inoculated with organisms nor supplied with glucose. After 60 days at a Ks/Ks0 of 0.1 (detailed below), group C members were inoculated with H. globosa as described previously. This group was used to determine if a formed clog could be reversed by grazing flagellates. Appropriate columns were inoculated with strain PS+ (108 cells ml−1) and H. globosa (105 cells ml−1) in AGW equivalent to 2 pore volumes. Pumping was stopped overnight to facilitate establishment of organisms and was again resumed at day 0. Continuous flow was maintained through columns for 80 days at an average initial flow volume of 12.0 ± 0.3 ml h−1. The apparatus was maintained at an ambient temperature of 21 ± 0.3°C during this period. Breakthrough curves.Transport properties in sand were measured for each column at the beginning and end of the experiment. A pulse of NaBr (10 mM, 2 min) in MilliQ water was injected in the flow line to the base port of each column while flow to the side port was stopped. Effluent samples were collected at regular time intervals equivalent to a total of 3 pore volumes, and the flow volume (Q, in milliliters hour−1) was determined by weight. Bromide concentrations were measured using a TOA Ion Analyzer (IA-100) equipped with an auto sample injector (TOA Electronics Ltd., Tokyo, Japan). Data were used to solve the advection-dispersion equation for coefficients of partitioning (β) and longitudinal dispersion (D) using the program CXTFIT 2.1 (31). β was used to determine the effective porosity of water-saturated sand in the equation θm = β·θ, where θ is the total porosity (0.43 ± 0.002), determined gravimetrically. D was used to determine the Peclet number (Pe) in the equation Pe = v·x/D, where x is the height of the sand column (7 cm), v (equivalent to Q/A·θ) is the average pore water velocity (in centimeters hour−1), and A is the cross-sectional area of the column (11.95 cm2). Pe is a dimensionless ratio that relates the effectiveness of mass transport by advection to that by dispersion or diffusion (6). 
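A minimal sketch of the breakthrough-curve arithmetic described above, using the constants given in the text (θ = 0.43, x = 7 cm, A = 11.95 cm²); the values of β, D and Q passed in are placeholders, since the CXTFIT fit results are not reproduced in this excerpt:

```python
# Effective porosity and Peclet number from breakthrough-curve fits, following
# the definitions in the text: theta_m = beta * theta and Pe = v * x / D,
# with v = Q / (A * theta).
theta = 0.43          # total porosity (gravimetric), from the text
x_cm = 7.0            # height of the sand column (cm), from the text
A_cm2 = 11.95         # cross-sectional area of the column (cm^2), from the text

def transport_metrics(beta, D_cm2_per_h, Q_ml_per_h):
    """beta and D come from fitting the advection-dispersion equation
    (e.g. with CXTFIT); Q is the measured flow volume."""
    theta_m = beta * theta                 # effective porosity
    v = Q_ml_per_h / (A_cm2 * theta)       # average pore water velocity (cm/h)
    Pe = v * x_cm / D_cm2_per_h            # Peclet number (dimensionless)
    return theta_m, v, Pe

# Illustrative placeholder values only (not results from the paper):
print(transport_metrics(beta=0.9, D_cm2_per_h=1.5, Q_ml_per_h=12.0))
```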
Saturated hydraulic conductivity (permeability). Darcy's Law (equation 1) was used to determine changes in the permeability (Ks, in centimeters hour−1) in each column as follows:
$$K_{S} = \frac{Q \cdot L}{A \cdot \partial H_{L}} \qquad (1)$$
where L is the vertical distance between piezometers (5 cm) and ∂HL is the change in hydraulic head between piezometers (in centimeters). The ratio Ks/Ks0 expressing differences in the permeability between time t and 0 days was used as an indicator of clogging. Analytical methods. Dissolved oxygen (DO) was measured using an azide modification of the Winkler technique (28). Unfiltered reservoir canister and effluent samples (2 ml) were collected in a nitrogen-filled, gastight syringe (5 ml; SGE International Pty. Ltd., Ringwood, Australia). The syringe was sealed with a rubber septum through which 15 μl each of two solutions (A and B) was injected into the sample followed by immediate mixing. Solution A contained (in grams per 10 ml) MnCl2·4H2O (8.0), and solution B contained (in grams per 10 ml) NaOH (3.6), KI (2.0), and NaN3 (0.05). Phosphoric acid (85%; 50 μl) was subsequently added with immediate mixing. Soluble starch solution (1%; 20 μl) was added to the sample and titrated to colorlessness against freshly prepared Na2S2O3 (0.5 mM). The detection sensitivity for the method was approximately 40 μg of DO liter−1. Anions (Br−, NO2−, NO3−, PO42−, and SO42−) were measured in a reservoir canister sterilized by filtration (0.2-μm pore size) and effluent samples (1 ml) using a TOA Ion Analyzer as described previously. The detection sensitivity was 0.1 mg of anion liter−1. Glucose was measured spectrophotometrically using a chromogen reagent containing glucose oxidase (EC 1.1.3.4; 3,600 U), peroxidase (EC 1.11.1.7; 250 U), 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) diammonium salt (ABTS; 165 mg), and NaN3 (3.0 mg) in phosphate buffer (0.1 M; 300 ml; pH 7.0). Phosphate buffer contained NaH2PO4·2H2O (15.6 g liter−1). Samples (60 μl) of effluent, standards, or from reservoir canisters were sterilized by filtration (0.2-μm pore size) and added to chromogen reagent (3.0 ml). After incubation (40 to 50°C; 30 min), the absorbance at 415 nm was measured using a HACH DR/2000 spectrophotometer (HACH Co., Loveland, Colo.). Glucose standard contained d-glucose (200 mg liter−1) and benzoic acid (400 mg liter−1) as a preservative. The detection sensitivity was approximately 120 μg of glucose liter−1. Cell enumeration. Samples (0.5 ml) of unfiltered effluent were briefly sonicated (100 W; 30 s) and were fixed with an equal volume of glutaraldehyde (1%) in phosphate buffer (pH 7.0). Phosphate buffer contained (in grams liter−1) NaCl (0.12), MgSO4·7H2O (0.04), CaCl2·6H2O (0.04), Na2HPO4 (1.42), and KH2PO4 (1.36). Fixed samples were enumerated in a Neubauer chamber (Sigma) examined at a magnification of ×320 under interference-contrast microscopy (Nikon Optiphot; Nikon, Tokyo, Japan). Samples were diluted with phosphate buffer as appropriate, and bacterial counts were averaged for 20 ruled units (volume, 2.5 × 10−7 ml each). Scanning electron microscopy (SEM). Sand cores were collected in copper cylinders (diameter, 0.8 by 1.0 cm) and were capped at both ends with a nylon mesh (100-μm pore size) secured with silicon tubing. Cores were prefixed (2 h, 4°C) with freshly prepared glutaraldehyde (2%) in sodium cacodylate buffer (0.05 M; pH 7.0), containing l-lysine (0.05 M) to stabilize exopolymers.
Main fixation with glutaraldehyde in sodium cacodylate buffer (8 h, 4°C) was followed by washing using the same buffer (8 h). Postfixation with osmium tetroxide (1%) in cacodylate buffer (2 h, 4°C) was followed by washing using the same buffer (2 h) and dehydration to 95% ethanol. Cores were dehydrated in absolute ethanol and were critical point dried with CO2 in a Hitachi HCP-2 (Hitachi, Tokyo, Japan). Sand was carefully extruded from each cylinder and was then mounted on aluminum stubs and sputtered with platinum using a Hitachi E102 Ion sputter coater. Stubs were examined using a Hitachi S-2500 scanning electron microscope operating at 15 kV. Statistical analysis.Data were transformed as log10 (X + 1) prior to analysis and were compared using one-way analysis of variance, linear regression, and Student's t test (26). Values are presented as sample means with standard errors unless otherwise stated. Physical evidence of bioclogging.Sand removed from all columns except abiotic negative controls (group D) showed direct evidence of deposition of exopolymeric slime on the surface of grains observed under SEM (Fig. 2A). Amorphous condensates of exopolymer were randomly distributed and appeared anchored to the surface of grains by a network of fine filaments (diameter, 40 ± 5 nm). These filaments encompassed and appeared to originate from microcolonies of rod-shaped bacteria (0.7 ± 0.04 by 0.3 ± 0.01 μm) residing on the surface (Fig. 2B). Slime secretion was first noted in the column effluent after 20 days. SEM micrographs showing exopolymer of Pseudomonas sp. (strain PS+) apparently attached to sand grains (A) and bacteria enmeshed in a filamentous network (B). A cyst of H. globosa (Fc) is also shown. Scale bars = 1 μm. A comparison of data for θm indicated significant differences over time and between groups of columns (Table 1). All columns except the abiotic negative controls showed significantly reduced values for θm compared with corresponding values at 0 days (P < 0.002). Columns containing only bacteria (group A) also showed significantly reduced values (P < 0.05) for θm (2.5-fold) in comparison with those containing both bacteria and flagellates (group B; 1.6-fold). Similarly, data for Pe indicated significant differences over time and between groups of columns. Columns containing only bacteria showed significantly reduced values for Pe in comparison with abiotic negative controls (P < 0.05) and also with values at 0 days (P < 0.05). However, Pe remained above 6 in all cases. Determining θm and Pe in sand-filled columns using the program CXTFIT 2.1 (31) to model bromide breakthrough dataa Sand permeability was significantly different between (but not within) all groups of columns in which it also decreased with time (P < 0.0001) (Fig. 3). Decreasing permeability was marked by oscillations in magnitude at periodic intervals. The largest sustained reduction in permeability (26-fold) occurred in columns containing only bacteria (group A) and was significantly lower (P < 0.01) than in those containing bacteria and flagellates (group B; 13-fold). The permeability in columns containing only bacteria was similar to that measured in group C (P > 0.9) and remained unchanged despite the addition of flagellates to the latter after 60 days (P > 0.7). Permeability reduction in abiotic negative controls was less than twofold. 
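These permeability figures are obtained from Darcy's law (equation 1) and expressed through the clogging indicator Ks/Ks0 defined in the Methods; a minimal sketch of that arithmetic, with illustrative head readings rather than the paper's measurements:

```python
# Saturated hydraulic conductivity from Darcy's law (equation 1) and the
# clogging indicator Ks/Ks0 used in the text.
L_cm = 5.0      # vertical distance between piezometers (cm), from the text
A_cm2 = 11.95   # column cross-sectional area (cm^2), from the text

def hydraulic_conductivity(Q_ml_per_h, dH_cm):
    """Ks = Q*L / (A*dH), in cm/h; dH is the head difference between piezometers."""
    return Q_ml_per_h * L_cm / (A_cm2 * dH_cm)

# Illustrative values only (not measurements from the paper):
Ks0 = hydraulic_conductivity(Q_ml_per_h=12.0, dH_cm=0.5)   # day 0
Ks  = hydraulic_conductivity(Q_ml_per_h=12.0, dH_cm=13.0)  # after clogging
print(f"Ks/Ks0 = {Ks / Ks0:.2f}")   # ~0.04 here, i.e. roughly a 26-fold reduction
```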
Changes in sand permeability with respect to time (in days) for groups A (flagellates absent; closed circles), B (flagellates present; open circles), C (flagellates added after 60 days; open squares), and D (abiotic control; closed squares). Data are shown as mean ± standard error of three replicates. Biological populations in columns.Bacterial numbers measured in the effluent from columns containing only bacteria and those containing both bacteria and flagellates were similar (P > 0.1) and generally increased with time. Superimposed on this increase were fluctuations in numbers that also appeared to increase in amplitude with time. Similar observations were made on numbers of flagellates exiting group B, though no relationship was found with corresponding numbers of bacteria (P < 0.0001). Variation within groups of columns was not found to be significant (P, >0.05 to 0.9) and probably had minimal impact on this effect. Interestingly, numbers of bacteria exiting these columns (Fig. 4) were inversely related with permeability (P < 0.001), whereas no relationship was found with numbers of flagellates (P > 0.1). Furthermore, numbers of bacteria exiting columns in groups A and C were not significantly different even after the addition of flagellates to the latter (P > 0.05). Bacterial aggregates (up to 500 μm in diameter) were observed in the effluent emerging from each column, whereas elongated bacterial cells (up to 12 μm in length) emerged only from columns containing both bacteria and flagellates. Relationship between sand permeability and numbers of bacteria in effluent from group B. The solid line shows the linear regression of the log10 data (r2 = 0.236, n = 63, P < 0.001). Bacterial numbers in sediment were also not significantly different for columns containing only bacteria and those containing both bacteria and flagellates (6.2 × 109 ± 0.5 × 109 per g [dry weight]). However, numbers within columns were reduced below the glucose inlet (2.0 × 109 ± 1.0 × 109 per g [dry weight]). Interestingly, flagellates (group B) were more numerous in sediment below the glucose inlet (1.0 × 107 ± 0.6 × 107 per g [dry weight]) than they were above glucose (2.1 × 106 ± 0.4 × 106 per g [dry weight]). Substrate utilization.Glucose measured in the effluent from columns containing only bacteria and from those containing both bacteria and flagellates was significantly depleted from 500 mg liter−1 (reservoir) to approximately 1.0 to 100 mg liter−1 after 10 days. Similarly, DO exiting these columns was also rapidly depleted from 7.0 ± 0.1 mg liter−1 (reservoir) to less than 1.0 mg liter−1 after 8 days (Fig. 5). Nitrate exiting these columns was also significantly depleted from the reservoir concentration. However, no significant differences were found either within or between columns containing only bacteria and those containing both bacteria and flagellates with respect to glucose, DO, or nitrate emerging in the effluent (P, >0.4 to 0.9). Interestingly, nitrite emerging from columns containing both bacteria and flagellates (18.2 ± 2.3 mg liter−1) was significantly higher (P < 0.001) than from those containing only bacteria (7.2 ± 1.6 mg liter−1) (Fig. 6). Again, no significant variation within groups of columns was found (P, >0.05 to 0.3). Effluent concentrations of DO from groups A (flagellates absent; closed circles), B (flagellates present; open circles), and D (abiotic control; closed squares) with respect to time (in days). Data are shown as mean ± standard error of three replicates. 
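The regressions quoted in this section (e.g., Fig. 4) follow the statistical treatment described in the Methods, namely a log10(X + 1) transform before analysis; a minimal sketch with placeholder arrays standing in for the measured data (the paper's regression used n = 63 points):

```python
import numpy as np
from scipy import stats

# log10(X + 1) transform followed by linear regression, as described in the Methods.
# The arrays below are placeholders, not the paper's data.
permeability = np.array([10.0, 6.0, 3.0, 1.5, 0.8, 0.5])   # cm/h (illustrative)
bacteria = np.array([2e6, 5e6, 1e7, 3e7, 8e7, 2e8])        # cells/ml (illustrative)

x = np.log10(bacteria + 1)
y = np.log10(permeability + 1)
fit = stats.linregress(x, y)
print(f"slope={fit.slope:.3f}, r^2={fit.rvalue**2:.3f}, p={fit.pvalue:.3g}")
```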
Effluent concentrations of nitrite from groups A (flagellates absent; closed symbols) and B (flagellates present; open symbols) with respect to time (in days). Data are shown as mean ± standard error of three replicates. Saturated hydraulic conductivity (permeability) measures the ability of a porous matrix to conduct water. While permeability is determined by porosity of the matrix, the latter may be influenced by microorganisms residing in pore spaces. Bacteria may reduce porosity and lead to clogging through one or more factors, including biomass accumulation, secretion of exopolymers, alteration of the redox environment, and production of insoluble biogas (for a review, see reference 2). Permeability has proven difficult to measure under field conditions, so laboratory models have been used to examine the dynamics of clogging in various porous matrices under different redox conditions (5, 20, 23, 29, 30, 32-35). Physical conditions affecting clogging.Porosity is related to the volume and contiguity of adjoining pores, which are both directly affected by grain size. Despite smaller grain size, the porosity of fine sand used in this study was higher (0.43 ± 0.002) than that for medium-to-coarse aquifer sediment (0.31 ± 0.002) from which the flagellates were isolated (36). The neck diameter of contiguous pores in this fine sand was estimated to be two to four times the width of these flagellates and probably afforded them maximum mobility for grazing on bacteria. Anaerobic conditions have traditionally been considered necessary for inducing large reductions in permeability (for a review, see reference 2). However, porous matrices loaded with aerobic effluents often showed higher rates and magnitudes of clogging than did those charged with anaerobic effluents (4, 18, 20). Even so, rapid proliferation of bacteria at aerobic inlets often induced steep nutrient (33) and DO (23) gradients that precluded growth in these matrices. Alternatively, facultative anaerobic bacteria were found to produce more extensive clogging throughout the same matrices (23). Similar observations were made during this study, where a facultative denitrifying Pseudomonas sp. (strain PS+) proliferated and induced clogging throughout sand columns supplied with glucose and nitrate at non-growth-limiting concentrations. Typically the initial C/N ratio (4.0) was low in order to simulate conditions in the subsurface at sites undergoing wastewater disposal treatment or in situ bioremediation with nitrate injection. The maximum sustained permeability reduction (26-fold) due to column clogging lay within the range previously reported for aerobic-anaerobic conditions (18). Bacterial biomass accumulation and clogging.The surface area for potential colonization by bacteria is normally higher in a fine-grained matrix, although the observed pattern is often sparse and heterogeneous. Indeed, permeability fell by 3 orders of magnitude when a non-exopolymer-forming Arthrobacter sp. occupied only 8.5% of pore volume (33). It has been suggested that reductions in permeability occur either from a strategically located biofilm that alters the geometry of pore necks (3) or by accumulation of bacterial aggregates lodged in those pore necks (33, 34). Bacterial aggregates are frequently formed as a defensive strategy towards grazing by protozoa (5, 10) and were formed by strain PS+ in response to H. globosa in batch cultures. 
However, aggregates occurred in the presence and/or absence of flagellates in columns and may have resulted from exopolymer secretion rather than in response to grazing. Presumably, aggregates formed in response to grazing would adversely affect permeability (5). That this did not occur in these columns may be partly due to the predatory efficiency of flagellates as discussed later. However, filaments formed by strain PS+ in the presence of flagellates were considered a strategy to avoid predation (9). Bacterial exopolymer and biogas affect clogging.Bacteria may secrete exopolymers as an adherent capsule and/or loosely associated slime layers. Interestingly, exopolymer secretion by strain PS+ was not found in batch cultures and may have resulted from direct contact with sand as reported previously (35). Bacterial exopolymers are mostly hydrated polysaccharides with the ability to reduce water-conducting spaces and hence permeability in porous matrices (4, 20, 23, 27, 32, 33, 35). Indeed, exopolymer secretion was associated with reduction in the θm of a microbially fouled gravel pack (4). Similarly, a reduction in θm in sand columns colonized by strain PS+ was deduced from bromide breakthrough data (Table 1). Two possible mechanisms could account for bacterial clogging in these columns. Firstly, clogging could result from exopolymer secretion by bacterial biofilms residing in pore spaces. Progressive accumulation of exopolymer could reduce θm (for a review, see reference 2) without having a major impact on the Pe. Secondly, clogging could result from strategic lodging of bacterial aggregates and/or small insoluble biogas bubbles in the necks between contiguous pore spaces (22, 33, 34). Blockage of pore necks would not have a significant effect on θm but would instead create stagnant zones in the matrix (22). Such zones would cause a shift from advective to dispersive flow with a consequent reduction in Pe (to ≤6) according to Fetter (6). This second possibility was considered less likely since strain PS+ significantly reduced θm, while Pe remained above 6 in all cases. Permeability reductions in sand columns containing strain PS+ also showed periodic oscillations in magnitude. A similar phenomenon linked with bacterial plug development and propagation was interpreted using a three-stage pressure model (27). The model assumed three successive phases of exopolymer induction, plugging, and plug propagation. Exopolymer induction was associated with development of contiguous bacterial colonies throughout the matrix and had minimal impact on pore volume. Plugging occurred when exopolymers were secreted into pore spaces and coincided with a reduction in matrix permeability. Advective water flow was diverted from channels in the matrix to those within the biofilm in the region of plugging. Biofilms were more restrictive to flow and permeability was reduced further until their sloughing point was reached. Shearing forces responsible for biofilm sloughing opened new flow channels, allowing partial restoration of permeability to occur (27). Although these forces have largely been attributed to changes in flow velocity within pore spaces (3, 27), it is possible that sloughing was assisted by biogas production in columns. Fine bubbles may have become entrapped in the network of exopolymer secreted by underlying bacteria and weakened the cohesiveness of the biofilm (1, 11). This would probably reduce θm in pore spaces and facilitate biofilm sloughing at lesser shearing forces. 
Subsequent deposition of sloughed biofilm in pore necks led to further reductions in permeability until at maximum pressure these plugs were swept downstream (27). This propagation phase was associated with periodic oscillations in permeability and repeated plug breakthrough. Reductions in sand permeability were also associated with higher numbers of bacteria released in column effluents. Presumably, the greater shearing forces associated with lower permeability caused increased numbers of bacteria to be sloughed into pore spaces prior to plug breakthrough. Interestingly, numbers of flagellates in column effluents were independent of permeability and suggested that these motile protozoa may have more control over their position in the mobile water phase than does nonmotile strain PS+. Although biogas was formed as a product of denitrification, there was no evidence to indicate that large gas pockets deleterious to permeability (11, 22) were permanently trapped in these sand columns. Gas bubbles were naturally vented from sand under prevailing hydrostatic pressures, and piezometers were serviced for potential gas locks several hours before measurements were made. Flagellates reduce bacterial clogging.Bacterivorous protozoa (especially heterotrophic flagellates) coexist with bacteria thriving on organic contaminants in the subsurface (8, 14, 16, 25, 36). One important consequence of protozoan grazing is the remineralization of bacterial nutrients (for a review, see reference 21). Circumstantial evidence suggested that protozoa feeding on bacteria in a polycyclic aromatic hydrocarbon (PAH)-contaminated aquifer may have remineralized PAH-derived carbon (8). Indeed, evidence for the reincorporation of carbon from biodegraded [14C]toluene into biomass by grazing flagellates (maximum 5%) was reported in batch culture (15). Circumstantial evidence also suggested that protozoa may play a role in maintaining hydraulic conductivity in aquifers undergoing biotreatment of degradable organics (25). This hypothesis was examined in sand columns inoculated with bacteria (nonmotile, non-exopolymer-secreting Bacillus sp.) and acanthamoebae (5). Despite repeated inoculation of columns with acanthamoebae, these protozoa were only able to maintain favorable permeability over a short period. Subsequent examination of the sand matrix revealed that bacteria had migrated up a nutrient gradient (against downward water flow) and away from the acanthamoebae, which had subsequently encysted to survive adverse grazing conditions (5). Columns used in the present study were operated with an upward water flow to reduce preferential flow paths in the sand matrix and to discourage nutrient gradients against gravity. Furthermore, these columns were inoculated only once with flagellates that were smaller and more mobile than were amoebae and were perhaps more representative of aquifer protozoa (16, 36). Protozoa are regarded as being more sensitive than bacteria to lower concentrations of DO (25), and this could limit their role in aquifers where anaerobic degradation predominates. Interestingly, denitrifying conditions did not appear to adversely affect the spatial distribution of flagellates above the glucose inlet in columns during the present study. However, flagellate cysts tended to accumulate at the base of these columns as previously observed for acanthamoebae (5). 
Nevertheless, these flagellates maintained a significantly higher sand permeability (13-fold reduction over controls) than did the columns from which they were absent (26-fold reduction). Corroborative evidence was also provided by higher θm in columns containing flagellates. The main impact of flagellates on permeability may be in reducing the ability of bacteria to accumulate exopolymers and biogas in pore spaces rather than in grazing bacteria strategically blocking pore necks. In addition, flagellate grazing may directly form water-conducting channels in the developing biofilm. Supporting evidence showed that flagellates added after 60 days were unable to restore θm or sand permeability. Presumably, bacterial biofilms and their secretions were well established by this stage. Surprisingly, higher concentrations of nitrite were detected in the effluent from columns with flagellates than from those containing only bacteria. Indeed, certain protozoa have been reported to perform dissimilatory nitrate reduction to provide an additional source of electron acceptors under anoxic conditions (7). However, there was no evidence to suggest a corresponding increase in nitrate consumption in the presence of these flagellates. Nevertheless, the data could suggest that flagellates restrict the potential for biogas formation by strain PS+ in sand. The role and importance of soil flagellates in the processes of nitrate reduction and denitrification are presently being investigated in our laboratory. We thank A. E. Wheeler and Y. Inomata for valuable assistance in constructing and operating the column apparatus. We also thank the New Energy and Industrial Technology Development Organization (NEDO) "Bioconsortia project" and Time Machine Bio (TMB) for funding this study and also NEDO for providing a personal fellowship to R.G.M. Received 18 April 2002. Accepted 13 June 2002. Battersby, N. S., D. J. Stewart, and A. P. Sharma. 1985. Microbiological problems in the offshore oil and gas industries. J. Appl. Bacteriol. Symp. 1985(Suppl.):227S-235S. Baveye, P., P. Vandevivere, B. L. Hoyle, P. C. DeLeo, and D. S. de Lozada. 1998. Environmental impact and mechanisms of the biological clogging of saturated soils and aquifer materials. Crit. Rev. Environ. Sci. Technol. 28:123-191. OpenUrlCrossRefWeb of Science Characklis, W. G., A. B. Cunningham, A. Escher, and D. Crawford. 1987. Biofilms in porous media, p. 57-78. In D. R. Cullimore (ed.), Proceedings of the International Symposium on Biofouled Aquifers: Prevention and Restoration. American Water Resources Association technical publication series. American Water Resources Association, Bethesda, Md. Cullimore, D. R., and N. Mansuy. 1987. A screen arc model well to simulate iron bacterial biofouling. J. Microbiol. Methods 7:225-232. DeLeo, P. C., and P. Baveye. 1997. Factors affecting protozoan predation of bacteria clogging laboratory aquifer microcosms. Geomicrobiol. J. 14:127-149. Fetter, C. W. 1992. Contaminant hydrogeology. Macmillan Publishing Co., New York, N.Y. Finlay, B. J., A. S. W. Span, and J. M. P. Harman. 1983. Nitrate respiration in primitive eukaryotes. Nature 303:333-336. Ghiorse, W. C., J. B. Herrick, R. L. Sandoli, and E. L. Madsen. 1995. Natural selection of PAH-degrading bacterial guilds at coal-tar disposal sites. Environ. Health Perspect. 103:107-111. Hahn, M. W., E. R. B. Moore, and M. G. Höfle. 1999. Bacterial filament formation, a defense mechanism against flagellate grazing, is growth rate controlled in bacteria of different phyla. Appl. 
Environ. Microbiol. 65:25-35. Hahn, M. W., E. R. B. Moore, and M. G. Höfle. 2000. Role of microcolony formation in the protistan grazing defense of the aquatic bacterium Pseudomonas sp. MWH1. Microb. Ecol. 39:175-185. Harremöes, P., J. C. Jansen, and G. H. Kristensen. 1980. Practical problems related to nitrogen bubble formation in fixed film reactors. Prog. Water Technol. 12:253-269. Hart, S. 1996. In situ bioremediation: defining the limits. Environ. Sci. Technol. 30:398A-401A. Kota, S., R. C. Borden, and M. A. Barlaz. 1999. Influence of protozoan grazing on contaminant biodegradation. FEMS Microbiol. Ecol. 29:179-189. Madsen, E. L., J. L. Sinclair, and W. C. Ghiorse. 1991. In situ biodegradation: microbiological patterns in a contaminated aquifer. Science 252:830-833. Mattison, R. G., and S. Harayama. 2001. The predatory soil flagellate Heteromita globosa stimulates toluene biodegradation by a Pseudomonas sp. FEMS Microbiol. Lett. 194:39-45. Novarino, G., A. Warren, H. Butler, G. Lambourne, A. Boxshall, J. Bateman, N. E. Kinner, R. W. Harvey, R. A. Mosse, and B. Teltsch. 1997. Protistan communities in aquifers: a review. FEMS Microbiol. Rev. 20:261-275. Oberdorfer, J. A., and F. L. Peterson. 1985. Waste-water injection: geochemical and biogeochemical clogging processes. Ground Water 23:753-761. Okubo, T., and J. Matsumoto. 1983. Biological clogging of sand and changes of organic constituents during artificial recharge. Water Res. 17:813-821. Page, F. C. 1988. A new key to freshwater and soil gymnamoebae. Freshwater Biological Association, Ambleside, Cumbria, United Kingdom. Raiders, R. A., M. J. McInerney, D. E. Revus, H. M. Torbati, R. M. Knapp, and G. E. Jenneman. 1986. Selectivity and depth of microbial plugging in Berea sandstone cores. J. Ind. Microbiol. 1:195-203. Ratsak, C. H., K. A. Maarsen, and S. A. L. M. Kooijman. 1996. Effects of protozoa on carbon mineralization in activated sludge. Water Res. 30:1-12. Ronen, D., B. Berkowitz, and M. Magaritz. 1989. The development and influence of gas bubbles in phreatic aquifers under natural flow conditions. Transport Porous Med. 4:295-306. Shaw, J. C., B. Bramhill, N. C. Wardlaw, and J. W. Costerton. 1985. Bacterial fouling in a model core system. Appl. Environ. Microbiol. 49:693-701. Sherr, B. F., E. B. Sherr, and J. McDaniel. 1992. Effect of protistan grazing on the frequency of dividing cells in bacterioplankton assemblages. Appl. Environ. Microbiol. 58:2381-2385. Sinclair, J. L., D. H. Kampbell, M. L. Cook, and J. T. Wilson. 1993. Protozoa in subsurface sediments from sites contaminated with aviation gasoline or jet fuel. Appl. Environ. Microbiol. 59:467-472. Sokal, R. R., and F. J. Rohlf. 1981. Biometry: the principles and practice of statistics in biological research. W. H. Freeman, San Francisco, Calif. Stewart, T. L., and H. S. Fogler. 2001. Biomass plug development and propagation in porous media. Biotechnol. Bioeng. 72:353-363. Suter, M. 1995. Abschätzung des Einflusses eines Azoarcus-Stammes auf den Abbau von Dieselöl in Säulensystemen. Diplomarbeit thesis. Institute for Terrestrial Ecology, ETH, Zurich, Switzerland. Taylor, S. W., and P. R. Jaffé. 1990. Biofilm growth and the related changes in the physical properties of a porous medium. I. Experimental investigation. Water Resour. Res. 26:2153-2159. Torbati, H. M., R. A. Raiders, E. C. Donaldson, M. J. McInerney, G. E. Jenneman, and R. M. Knapp. 1986. Effect of microbial growth on pore entrance size distribution in sandstone cores. J. Ind. Microbiol. 1:227-234. 
Toride, N., F. J. Leij, and M. T. van Genuchten. 1999. The CXTFIT code for estimating transport parameters from laboratory or field tracer experiments. Version 2.1. Research report no. 137. U.S. Salinity Laboratory, U.S. Department of Agriculture, Riverside, Calif. Vandevivere, P., and P. Baveye. 1992. Effect of bacterial extracellular polymers on the saturated hydraulic conductivity of sand columns. Appl. Environ. Microbiol. 58:1690-1698. Vandevivere, P., and P. Baveye. 1992. Saturated hydraulic conductivity reduction caused by aerobic bacteria in sand columns. Soil Sci. Soc. Am. J. 56:1-13. Vandevivere, P., and P. Baveye. 1992. Relationship between transport of bacteria and their clogging efficiency in sand columns. Appl. Environ. Microbiol. 58:2523-2530. Vandevivere, P., and D. L. Kirchman. 1993. Attachment stimulates exopolysaccharide synthesis by a bacterium. Appl. Environ. Microbiol. 59:3280-3286. Zarda, B., G. Mattison, A. Hess, D. Hahn, P. Höhener, and J. Zeyer. 1998. Analysis of bacterial and protozoan communities in an aquifer contaminated with monoaromatic hydrocarbons. FEMS Microbiol. Ecol. 27:141-152. Applied and Environmental Microbiology Sep 2002, 68 (9) 4539-4545; DOI: 10.1128/AEM.68.9.4539-4545.2002
Some remarks for a modified periodic Camassa-Holm system
Guangying Lv 1 and Mingxin Wang 2
1 Department of Mathematics, Southeast University, Nanjing 210018; 2 Natural Science Research Center, Harbin Institute of Technology, Harbin 150080, China
November 2011, 30(4): 1161-1180. doi: 10.3934/dcds.2011.30.1161
Received May 2010 Revised August 2010 Published May 2011
This paper is concerned with a modified two-component periodic Camassa-Holm system. The local well-posedness and a low-regularity result for solutions are established using the techniques of pseudoparabolic regularization and some a priori estimates derived from the equation itself. Wave breaking for strong solutions and several blow-up results for certain initial profiles are described. In addition, the initial boundary value problem for a modified two-component periodic Camassa-Holm system is also considered.
Keywords: Wave breaking, Periodic, Two-component Camassa-Holm system, Blow-up, Local well-posedness.
Mathematics Subject Classification: Primary: 35G25; Secondary: 35B30, 35L0.
Citation: Guangying Lv, Mingxin Wang. Some remarks for a modified periodic Camassa-Holm system. Discrete & Continuous Dynamical Systems - A, 2011, 30 (4) : 1161-1180. doi: 10.3934/dcds.2011.30.1161
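For orientation only (the equations themselves are not reproduced in this excerpt, and the paper's exact conventions may differ), one common form of the unmodified two-component Camassa-Holm system, on which the modified periodic system builds, is:

```latex
% Sketch of the (unmodified) two-component Camassa-Holm system as it is
% commonly written in the literature; the "modified" variant studied in the
% paper alters the density variable, so this is orientation only.
\begin{aligned}
  m_t + u\,m_x + 2\,u_x\,m + \rho\,\rho_x &= 0, \qquad m = u - u_{xx},\\
  \rho_t + (u\rho)_x &= 0,
\end{aligned}
\qquad x \in \mathbb{S}^1,\ t > 0 .
```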
Study and validation of a new "3D Calorimetry" of hot nuclei with the HIPSE event generator (1804.07552)
E. Vient, L. Manduci, E. Legouée, L. Augey, E. Bonnet, B. Borderie, R. Bougault, A. Chbihi, D. Dell'Aquila, Q. Fable, L. Francalanza, J.D. Frankland, E. Galichet, D. Gruyer, D. Guinet, M. Henri, M. La Commara, G. Lehaut, N. Le Neindre, I. Lombardo, O. Lopez, P. Marini, M. Parlog, M. F. Rivet, E. Rosato, R. Roy, P. St-Onge, G. Spadaccini, G. Verde, M. Vigilante
April 20, 2018 nucl-ex
In nuclear thermodynamics, the determination of the excitation energy of hot nuclei is a fundamental experimental problem. Instrumental physicists have been trying to solve this problem for several years by building the most exhaustive 4$\pi$ detector arrays and perfecting their calorimetry techniques. In a recent paper, a proposal for a new calorimetry, called "3D calorimetry", was made. It tries to optimize the separation between the particles and fragments emitted by the Quasi-Projectile and the other possible contributions. This can be achieved by determining the experimental probability for a given nucleus of a nuclear reaction to be emitted by the Quasi-Projectile. It has been developed for the INDRA data. In the present work, we wanted to dissect and validate this new method of characterization of a hot Quasi-Projectile. We therefore tried to understand and control it completely, in order to determine its limits. Using the HIPSE event generator and a software package simulating the functioning of INDRA, we were able to achieve this goal and provide a quantitative estimation of the quality of the QP characterization.

Experimental study of precisely selected evaporation chains in the decay of excited $^{25}$Mg (1804.06294)
A. Camaiani, G. Casini, L. Morelli, S. Barlini, S. Piantelli, G. Baiocco, M. Bini, M. Bruno, A. Buccola, M. Cinausero, M. Cicerchia, M. D'Agostino, M. Degelier, D. Fabris, C. Frosin, F. Gramegna, F. Gulminelli, G. Mantovani, T. Marchi, A. Olmi, P. Ottanelli, G. Pasquali, G. Pastore, S. Valdre, G. Verde
The reaction $^{12}$C + $^{13}$C at 95 MeV bombarding energy is studied using the GARFIELD + Ring Counter apparatus located at the INFN Laboratori Nazionali di Legnaro. In this paper we investigate the de-excitation of $^{25}$Mg, aiming both at a new stringent test of the statistical description of nuclear decay and at a direct comparison with the decay of the $^{24}$Mg system formed through previously studied $^{12}$C+$^{12}$C reactions. Thanks to the large acceptance of the detector and to its good fragment identification capabilities, we could apply stringent selections on fusion-evaporation events, requiring their completeness in charge. The main decay features of the evaporation residues and of the emitted light particles are overall well described by a pure statistical model; however, as in the case of the previously studied $^{24}$Mg, we observed some deviations in the branching ratios, in particular for those chains involving only the evaporation of $\alpha$ particles. From this point of view the behavior of the $^{24}$Mg and $^{25}$Mg decay cases appears to be rather similar. An attempt to obtain a full mass balance even without neutron detection is also discussed.

Isospin influence on dynamical production of Intermediate Mass Fragments at Fermi Energies (1803.03046)
P. Russotto, E. De Filippo, E.V. Pagano, L. Acosta,
Auditore, T. Cap, G. Cardella, S. De Luca, B. Gnoffo, G. Lanzalone, I. Lombardo, C. Maiolino, N.S. Martorana, T. Minniti, S. Norella, A. Pagano, M. Papa, E. Piasecki, S. Pirrone, G. Politi, F. Porto, L. Quattrocchi, F. Rizzo, E. Rosato, K. Siwek-Wilczyńska, A. Trifirò, M. Trimarchi, G. Verde, J.Wilczyński March 8, 2018 nucl-ex The Intermediate Mass Fragments emission probability from Projectile-Like Fragment break-up in semi-peripheral reactions has been measured in collisions of $^{124}$Xe projectiles with two different targets of $^{64}$Ni and $^{64}$Zn at the laboratory energy of 35 \amev. The two colliding systems differ only for the target atomic number Z and, consequently, for the Isospin $N/Z$ ratio. An enhancement of Intermediate Mass Fragments production for the neutron rich $^{64}$Ni target, with respect to the $^{64}$Zn, is found. In the case of one Intermediate Mass Fragment emission, the contributions of the dynamical and statistical emissions have been evaluated, showing that the increase of the effect above is due to an enhancement of the dynamical emission probability, especially for heavy IMFs (Z$\gtrsim$ 7). This proves an influence of the target Isospin on inducing the dynamical fragment production from Projectile-Like Fragment break-up. In addition, a comparison of the Xe+Ni,Zn results with the previously studied $^{112,124}Sn+^{58,64}Ni$ systems is discussed in order to investigate the influence of the projectile Isospin alone and to disentangle between Isospin effects against system-size effects on the emission probability. These comparisons suggest that the prompt-dynamical emission is mainly ruled by the $N/Z$ content of, both, projectile and target; for the cases here investigated, the influence of the system size on the dynamical emission probability can be excluded. Charge reconstruction in large-area photomultipliers (1801.08690) M. Grassi, M. Montuschi, M. Baldoncini, F. Mantovani, B. Ricci, G. Andronico, V. Antonelli, M. Bellato, E. Bernieri, A.Brigatti, R. Brugnera, A. Budano, M. Buscemi, S. Bussino, R. Caruso, D. Chiesa, D. Corti, F. Dal Corso, X. F. Ding, S. Dusini, A.Fabbri, G.Fiorentini, R. Ford, A. Formozov, G. Galet, A. Garfagnini, M. Giammarchi, A. Giaz, A. Insolia, R. Isocrate, I. Lippi, F. Longhitano, D. Lo Presti, P. Lombardi, F. Marini, S. M. Mari, C. Martellini, E. Meroni, M. Mezzetto, L. Miramonti, S. Monforte, M. Nastasi, F. Ortica, A. Paoloni, S. Parmeggiano, D. Pedretti, N. Pelliccia, R. Pompilio, E. Previtali, G. Ranucci, A. C. Re, A. Romani, P. Saggese, G. Salamanna, F. H. Sawy, G. Settanta, M. Sisti, C. Sirignano, M. Spinetti, L. Stanco, V. Strati, G. Verde, L. Votano Jan. 26, 2018 physics.ins-det Large-area PhotoMultiplier Tubes (PMT) allow to efficiently instrument Liquid Scintillator (LS) neutrino detectors, where large target masses are pivotal to compensate for neutrinos' extremely elusive nature. Depending on the detector light yield, several scintillation photons stemming from the same neutrino interaction are likely to hit a single PMT in a few tens/hundreds of nanoseconds, resulting in several photoelectrons (PEs) to pile-up at the PMT anode. In such scenario, the signal generated by each PE is entangled to the others, and an accurate PMT charge reconstruction becomes challenging. This manuscript describes an experimental method able to address the PMT charge reconstruction in the case of large PE pile-up, providing an unbiased charge estimator at the permille level up to 15 detected PEs. 
The method is based on a signal filtering technique (Wiener filter) which suppresses the noise due to both PMT and readout electronics, and on a Fourier-based deconvolution able to minimize the influence of signal distortions ---such as an overshoot. The analysis of simulated PMT waveforms shows that the slope of a linear regression modeling the relation between reconstructed and true charge values improves from $0.769 \pm 0.001$ (without deconvolution) to $0.989 \pm 0.001$ (with deconvolution), where unitary slope implies perfect reconstruction. A C++ implementation of the charge reconstruction algorithm is available online at http://www.fe.infn.it/CRA . A new "3D Calorimetry" of hot nuclei (1709.07396) E. Vient, L. Manduci, E. Legouée, L. Augey, E. Bonnet, B. Borderie, R. Bougault, A. Chbihi, D. Dell'Aquila, Q. Fable, L. Francalanza, J.D. Frankland, E. Galichet, D. Gruyer, D. Guinet, M. Henri, M. La Commara, G. Lehaut, N. Le Neindre, I. Lombardo, O. Lopez, P. Marini, M. Parlog, M. F. Rivet, E. Rosato, R. Roy, P. St-Onge, G. Spadaccini, G. Verde, M. Vigilante Sept. 21, 2017 nucl-ex, physics.data-an In the domain of Fermi energy, it is extremely complex to isolate experimentally fragments and particles issued from the cooling of a hot nucleus produced during a heavy ion collision. This paper presents a new method to characterize more precisely hot Quasi-Projectiles. It tries to take into account as accurately as possible the distortions generated by all the other potential participants in the nuclear reaction. It is quantitatively shown that this method is a major improvement respect to classic calorimetries used with a 4$\pi$ detector array. By detailing and deconvolving the different steps of the reconstitution of the hot nucleus, this study shows also the respective role played by the experimental device and the event selection criteria on the quality of the determination of QP characteristics. Improving isotopic identification with \emph{INDRA} Silicon-CsI(\emph{Tl}) telescopes (1707.08863) O. Lopez, M. Parlog, B. Borderie, M.F. Rivet, G. Lehaut, G. Tabacaru, L. Tassan-got, P. Pawlowski, E. Bonnet, R. Bougault, A. Chbihi, D. Dell'Aquila, J.D. Frankland, E. Galichet, D. Gruyer, M. La Commara, N. Le Neindre, I. Lombardo, L. Manduci, P. Marini, J.C. Steckmeyer, G. Verde, E. Vient, J.P. Wieleczko July 27, 2017 nucl-ex, physics.ins-det Profiting from previous works done with the \emph{INDRA} multidetector on the description of the light response $\mathcal L$ of the CsI(\emph{Tl}) crystals to different impinging nuclei, we propose an improved $\Delta E - \mathcal L$ identification-calibration procedure for Silicon-Cesium Iodide (Si-CsI) telescopes, namely an Advanced Mass Estimate (\emph{AME}) method. \emph{AME} is compared to the usual, %$"\Delta E - E"$ simple visual analysis of the corresponding two-dimensional map of $\Delta E - E$ type, by using \emph{INDRA} experimental data from nuclear reactions induced by heavy ions in the Fermi energy regime. We show that the capability of such telescopes to identify both the atomic $Z$ and the mass $A$ numbers of light and heavy reaction products, can be quantitatively improved thanks to the proposed approach. This conclusion opens new possibilities to use \emph{INDRA} for studying these reactions especially with radioactive beams. Indeed, the determination of the mass for charged reaction products becomes of paramount importance to shed light on the role of the isospin degree of freedom in the nuclear equation of state. 
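Referring back to the PMT charge-reconstruction method summarized above (Grassi et al., arXiv:1801.08690), the following is a minimal Python sketch of the two ingredients named there: Wiener-type suppression of noise-dominated frequency bins, followed by a regularized Fourier deconvolution of the single-photoelectron (single-PE) response. It is only an illustration under stated assumptions, not the published C++ implementation; the single-PE template, the one-sided noise power spectral density sampled at the rfft frequencies, and the regularization constant are all assumed inputs.

import numpy as np

def reconstruct_charge(waveform, spe_template, noise_psd, dt):
    # Estimate the total charge of a piled-up PMT waveform by Wiener-type
    # noise suppression followed by regularized deconvolution of the
    # single-PE response in the Fourier domain.
    n = len(waveform)
    W = np.fft.rfft(waveform)
    H = np.fft.rfft(spe_template, n=n)              # single-PE transfer function
    signal_psd = np.abs(W) ** 2                     # crude signal-power estimate
    wiener = signal_psd / (signal_psd + noise_psd)  # attenuate noise-dominated bins
    eps = 1e-3 * np.max(np.abs(H))                  # regularization against division by ~0
    deconv = wiener * W * np.conj(H) / (np.abs(H) ** 2 + eps ** 2)
    pe_train = np.fft.irfft(deconv, n=n)            # approximate photoelectron arrival train
    return np.sum(pe_train) * dt                    # integral, proportional to the charge

The quality check quoted in the abstract, the slope of a linear regression of reconstructed versus true charge, could then be performed with, e.g., numpy.polyfit(true_charges, reconstructed_charges, 1).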
Understand the thermometry of hot nuclei from the energy spectra of light charged particles (1707.01264) E. Vient, L. Augey, B. Borderie, A. Chbihi, D. Dell'Aquila, Q. Fable, L. Francalanza, J.D. Frankland, E. Galichet, D. Gruyer, D. Guinet, M. Henri, M. La Commara, E. Legouée, G. Lehaut, N. Le Neindre, I. Lombardo, O. Lopez, L. Manduci, P. Marini, M. Parlog, M. F. Rivet, E. Rosato, R. Roy, P. St-Onge, G. Spadaccini, G. Verde, M. Vigilante July 5, 2017 nucl-ex In the domain of Fermi energy, the hot nucleus temperature can be determined by using the energy spectra of evaporated light charged particles. But this method of measurement is not without difficulties both theoretical and experimental. The presented study aims to disentangle the respective influences of different factors on the quality of this measurement : the physics, the detection (a 4? detector array as INDRA) and the experimental procedure. This analysis demonstrates the possibility of determining from an energy spectrum, with an accuracy of about 10 %, the true apparent temperature felt by a given type of particle emitted by a hot nucleus. Three conditions are however necessary : have a perfect detector of particles, an important statistics and very few secondary emissions. According to the GEMINI event generator, for hot nuclei of intermediate mass, only deuterons and tritons could fill these conditions. This temperature can allow to trace back to the initial temperature by using an appropriate method. This determination may be better than 15 %. With a real experimental device, an insufficient angular resolution and topological distortions caused by the detection can damage spectra to the point to make very difficult a correct determination of the apparent temperature. The experimental reconstruction of the frame of the hot nucleus may also be responsible for this deterioration High precision probe of the fully sequential decay width of the Hoyle state in $^{12}$C (1705.09196) D. Dell'Aquila, I. Lombardo, G. Verde, M. Vigilante, L. Acosta, C. Agodi, F. Cappuzzello, D. Carbone, M. Cavallaro, S. Cherubini, A. Cvetinovic, G. D'Agata, L. Francalanza, G.L. Guardo, M. Gulino, I. Indelicato, M. La Cognata, L. Lamia, A. Ordine, R.G. Pizzone, S.M.R. Puglia, G.G. Rapisarda, S. Romano, G. Santagati, R. Spartà, G. Spadaccini, C. Spitaleri, A. Tumino June 18, 2017 nucl-ex The decay path of the Hoyle state in $^{12}$C ($E_x=7.654\textrm{MeV}$) has been studied with the $^{14}\textrm{N}(\textrm{d},\alpha_2)^{12}\textrm{C}(7.654)$ reaction induced at $10.5\textrm{MeV}$. High resolution invariant mass spectroscopy techniques have allowed to unambiguously disentangle direct and sequential decays of the state passing through the ground state of $^{8}$Be. Thanks to the almost total absence of background and the attained resolution, a fully sequential decay contribution to the width of the state has been observed. The direct decay width is negligible, with an upper limit of $0.043\%$ ($95\%$ C.L.). The precision of this result is about a factor $5$ higher than previous studies. This has significant implications on nuclear structure, as it provides constraints to $3$-$\alpha$ cluster model calculations, where higher precision limits are needed. Light charged clusters emitted in 32 MeV/nucleon 136,124Xe+124,112Sn reactions: chemical equilibrium, 3He and 6He production (1703.03694) R. Bougault, E. Bonnet, B. Borderie, A. Chbihi, D. Dell'Aquila, Q. Fable, L. Francalanza, J.D. Frankland, E. Galichet, D. Gruyer, D. Guinet, M. Henri, M. La Commara, N. 
Le Neindre, I. Lombardo, O. Lopez, L. Manduci, P. Marini, M. Parlog, R. Roy, P. Saint-Onge, G. Verde, E. Vient, M. Vigilante March 10, 2017 nucl-ex Nuclear particle production from peripheral to central events is presented. N/Z gradient between projectile and target is studied using the fact that two reactions have the same projectile+target N/Z and so the same neutron to proton ratio for the combined system. Inclusive data study in the forward part of the center of mass indicates that N/Z equilibration between the projectile-like and the target-like is achieved for central collisions. Particles are also produced from mid-rapidity region. 3He mean pre-equilibrium character is evidenced and 6He production at mid-rapidity implies a neutron enrichment phenomenon of the projectile target interacting zone. Results of the ASY-EOS experiment at GSI: The symmetry energy at suprasaturation density (1608.04332) P. Russotto, S. Gannon, S. Kupny, P. Lasko, L. Acosta, M. Adamczyk, A. Al-Ajlan, M. Al-Garawi, S. Al-Homaidhi, F. Amorini, L. Auditore, T. Aumann, Y. Ayyad, Z. Basrak, J. Benlliure, M. Boisjoli, K. Boretzky, J. Brzychczyk, A. Budzanowski, C. Caesar, G. Cardella, P. Cammarata, Z. Chajecki, M. Chartier, A. Chbihi, M. Colonna, M. D. Cozma, B. Czech, E. De Filippo, M. Di Toro, M. Famiano, I. Gašparić, L. Grassi, C. Guazzoni, P. Guazzoni, M. Heil, L. Heilborn, R. Introzzi, T. Isobe, K. Kezzar, M. Kiš, A. Krasznahorkay, N. Kurz, E. La Guidara, G. Lanzalone, A. Le Fèvre, Y. Leifels, R. C. Lemmon, Q. F. Li, I. Lombardo, J. Lukasik, W. G. Lynch, P. Marini, Z. Matthews, L. May, T. Minniti, M. Mostazo, A. Pagano, E. V. Pagano, M. Papa, P. Pawlowski, S. Pirrone, G. Politi, F. Porto, W. Reviol, F. Riccio, F. Rizzo, E. Rosato, D. Rossi, S. Santoro, D. G. Sarantites, H. Simon, I. Skwirczynska, Z. Sosin, L. Stuhl, W. Trautmann, A. Trifirò, M. Trimarchi, M. B. Tsang, G. Verde, M. Veselsky, M. Vigilante, Yongjia Wang, A. Wieloch, P. Wigg, J. Winkelbauer, H. H. Wolter, P. Wu, S. Yennello, P. Zambon, L. Zetta, M. Zoric Sept. 27, 2016 nucl-ex Directed and elliptic flows of neutrons and light charged particles were measured for the reaction 197Au+197Au at 400 MeV/nucleon incident energy within the ASY-EOS experimental campaign at the GSI laboratory. The detection system consisted of the Large Area Neutron Detector LAND, combined with parts of the CHIMERA multidetector, of the ALADIN Time-of-flight Wall, and of the Washington-University Microball detector. The latter three arrays were used for the event characterization and reaction-plane reconstruction. In addition, an array of triple telescopes, KRATTA, was used for complementary measurements of the isotopic composition and flows of light charged particles. From the comparison of the elliptic flow ratio of neutrons with respect to charged particles with UrQMD predictions, a value \gamma = 0.72 \pm 0.19 is obtained for the power-law coefficient describing the density dependence of the potential part in the parametrization of the symmetry energy. It represents a new and more stringent constraint for the regime of supra-saturation density and confirms, with a considerably smaller uncertainty, the moderately soft to linear density dependence deduced from the earlier FOPI-LAND data. The densities probed are shown to reach beyond twice saturation. Semi-automatic charge and mass identification in two-dimensional matrices (1607.08529) D. Gruyer, A. Chbihi, S. Barlini (INFN, Sezione di Firenze), B. Borderie, J. A. Duenas, N. Le Neindre, S. Piantelli, G. Verde, T. 
Kozik, M Pârlog Aug. 4, 2016 nucl-ex, physics.ins-det This article presents a new semi-automatic method for charge and mass identification in two-dimensional matrices. The proposed algorithm is based on the matrix's properties and uses as little information as possible on the global form of the identification lines, making it applicable to a large variety of matrices, including Particular attention has been paid to the implementation in a suitable graphical environment, so that only two mouse-clicks are required from the user to calculate all initialization parameters. Example applications to recent data from both INDRA and FAZIA telescopes are presented. Reaction and fusion cross sections for the near-symmetric system $^{129}Xe+^{nat}Sn$ from $8$ to $35$ $AMeV$ (1607.02900) L. Manduci, O. Lopez, A. Chbihi, M.F. Rivet, R. Bougault, J.D. Frankland, B. Borderie, E. Galichet, M. La Commara, N. Le Neindre, I. Lombardo, M. Pârlog, E. Rosato, R. Roy, G. Verde, E. Vient July 11, 2016 nucl-ex \item[Background]Heavy-ion reactions from barrier up to Fermi energy. \item[Purpose]Reaction and fusion cross sections determination. Fusion reactions induced by $^{129}Xe$ projectiles on $^{nat}Sn$ targets for energies ranging from $8$ A.MeV to $35$ A.MeV were measured with the INDRA $4\pi$-array.\\ The evaluation of the fusion/incomplete fusion cross sections for the incident energies from 8 to 35 A.MeV is the main purpose of this paper. \item[Method] The reaction cross sections are evaluated for each beam energy thanks to INDRA $4\pi$-array. The events are also sorted in order to focus the study on a selected sample of events, in such a way that the fusion/fusion incomplete cross section is estimated. \item[Results] The excitation function of reaction and fusion cross sections were measured for the heavy and nearly symmetric system $^{129}Xe + ^{nat}Sn$ from 8 to 35 A.MeV. \item[Conclusions] The fusion-like cross-sections evaluated show a good agrement with a recent systematics for beam energies greater than 20 A.MeV. For low beam energies the cross-section values are lower than the expected ones. A probable reason for these low values is in the fusion hindrance at energies above/close the barrier. Dipolar degrees of freedom and Isospin equilibration processes in Heavy Ion collisions (1501.00801) M.Papa, I. Berceanu, L. Acosta, F. Amorini, C. Agodi, A. Anzalone, L. Auditore, G. Cardella, S. Cavallaro, M.B. Chatterjee, E. De Filippo, L. Francalanza, E. Geraci, L. Grassi, B. Gnoffo, J. Han, E. La Guidara, G. Lanzalone, I. Lombardo, C. Maiolino T. Minniti A. Pagano, E.V. Pagano, S. Pirrone, G. Politi, F. Porto, L. Quattrocchi, F. Rizzo, E. Rosato, P. Russotto, A. Trifirò, M. Trimarchi, G. Verde, M. Vigilante Jan. 5, 2015 nucl-ex Background: In heavy ion collision at the Fermi energies Isospin equilibration processes occur- ring when nuclei with different charge/mass asymmetries interacts have been investigated to get information on the nucleon-nucleon Iso-vectorial effective interaction. Purpose: In this paper, for the system 48Ca +27 Al at 40 MeV/nucleon, we investigate on this process by means of an observable tightly linked to isospin equilibration processes and sensitive in exclusive way to the dynamical stage of the collision. From the comparison with dynamical model calculations we want also to obtain information on the Iso-vectorial effective microscopic interaction. 
Method: The average time derivative of the total dipole associated to the relative motion of all emitted charged particles and fragments has been determined from the measured charges and velocities by using the 4? multi-detector CHIMERA. The average has been determined for semi- peripheral collisions and for different charges Zb of the biggest produced fragment. Experimental evidences collected for the systems 27Al+48Ca and 27Al+40Ca at 40 MeV/nucleon used to support this novel method of investigation are also discussed. Signals of Bose Einstein condensation and Fermi quenching in the decay of hot nuclear systems (1501.00595) P. Marini, H. Zheng, M. Boisjoli, G. Verde, A. Chbihi, G. Ademard, L. Auger, C. Bhattacharya, B. Borderie, R. Bougault, J. Frankland, E. Galichet, D. Gruyer, S. Kundu, M. La Commara, I. Lombardo, O. Lopez, G. Mukherjee, P. Napolitani, M. Parlog, M. F. Rivet, E. Rosato, R. Roy, G. Spadaccini, M. Vigilante, P. C. Wigg, A. Bonasera We report experimental signals of Bose-Einstein condensation in the decay of hot Ca projectile-like sources produced in mid-peripheral collisions at sub-Fermi energies. The experimental setup, constituted by the coupling of the INDRA 4$\pi$ detector array to the forward angle VAMOS magnetic spectrometer, allowed us to reconstruct the mass, charge and excitation energy of the decaying hot projectile-like sources. Furthermore, by means of quantum fluctuation analysis techniques, temperatures and mean volumes per particle "as seen by" bosons and fermions separately are correlated to the excitation energy of the reconstructed system. The obtained results are consistent with the production of dilute mixed (bosons/fermions) systems, where bosons experience a smaller volume as compared to the surrounding fermionic gas. Our findings recall similar phenomena observed in the study of boson condensates in atomic traps. Effective Nucleon Masses from Heavy Ion Collisions (1406.4546) D.D.S. Coupland, M. Youngs, W.G. Lynch, M.B. Tsang, Z. Chajecki, Y. X. Zhang, M.A. Famiano, T.K. Ghosh, B. Giacherio, M. A. Kilburn, Jenny Lee, F. Lu, P. Russotto, A. Sanetullaev, R. H. Showalter, G. Verde, J. Winkelbauer We probe the momentum dependence of the isovector mean-field potential by comparing the energy spectra of neutrons and protons emitted in $^{112}$Sn+$^{112}$Sn and $^{124}$Sn+$^{124}$Sn collisions at incident energies of E/A=50 and 120 MeV. We achieve experimental precision that discriminates between different momentum dependencies for the symmetry mean-field potential. Comparisons of the experimental results to Improved Quantum Molecular Dynamics model calculations with Skyrme Interactions indicate small differences between the neutron and proton effective masses. Evidence for a Novel Reaction Mechanism of a Prompt Shock-Induced Fission Following the Fusion of 78Kr and 40Ca Nuclei at E/A =10 MeV (1404.3758) E. Henry, J. Toke, S. Nyibule, M. Quinlan, W.U. Schroder, G. Ademard, F. Amorini, L. Auditore, C. Beck, I. Berceanu, E. Bonnet, B. Borderie, G. Cardella, A. Chbihi, M. Colonna, E. De Filippo, A. DOnofrio, J.D. Frankland, E. Geraci, E. La Guidara, M. La Commara, G. Lanzalone, P. Lautesse, D. Lebhertz, N. Le Neindre, I. Lombardo, D. Loria, K. Mazurek, A. Pagano, M. Papa, E. Piasecki, S. Pirrone, G. Politi, F. Porto, F. Rizzo, E. Rosato, P. Rusotto, G. Spadaccini, A. Trifiro, M. Trimarchi, G. Verde, M. Vigilante, J.P. 
Wieleczko An analysis of experimental data from the inverse-kinematics ISODEC experiment on 78Kr+40Ca reaction at a bombarding energy of 10 AMeV has revealed signatures of a hitherto unknown reaction mechanism, intermediate between the classical damped binary collisions and fusion-fission, but also substantially different from what is being termed in the literature as fast fission or quasi fission. These signatures point to a scenario where the system fuses transiently while virtually equilibrating mass asymmetry and energy and, yet, keeping part of the energy stored in a collective shock-imparted and, possibly, angular momentum bearing form of excitation. Subsequently the system fissions dynamically along the collision or shock axis with the emerging fragments featuring a broad mass spectrum centered around symmetric fission, relative velocities somewhat higher along the fission axis than in transverse direction, and virtually no intrinsic spin. The class of massasymmetric fission events shows a distinct preference for the more massive fragments to proceed along the beam direction, a characteristic reminiscent of that reported earlier for dynamic fragmentation of projectile-like fragments alone and pointing to the memory of the initial mass and velocity distribution. Scaling properties of light-cluster production (1402.5216) Z.Chajecki, M.Youngs, D.D.S.Coupland, W.G.Lynch, M.B.Tsang, D. Brown, A. Chbihi, P. Danielewicz, R.T. deSouza, M.A. Famiano, T.K. Ghosh, B. Giacherio, V. Henzl, D. Henzlova, C. Herlitzius, S. Hudan, M. A. Kilburn, Jenny Lee, F. Lu, S. Lukyanov, A.M. Rogers, P. Russotto, A. Sanetullaev, R. H. Showalter, L.G. Sobotka, Z.Y. Sun, A.M. Vander Molen, G. Verde, M.S. Wallace, J. Winkelbauer Feb. 21, 2014 nucl-ex We show that ratios of light-particle energy spectra display scaling properties that can be accu- rately described by effective local chemical potentials. This demonstrates the equivalence of t/3He and n/p spectral ratios and provides an essential test of theoretical predictions of isotopically resolved light-particle spectra. In addition, this approach allows direct comparisons of many theoretical n/p spectral ratios to experiments where charged-particle spectra but not neutron spectra are accurately measured. Such experiments may provide much more quantitative constraints on the density and momentum dependence of the symmetry energy. Kinematical coincidence method in transfer reactions (1212.4593) L. Acosta, F. Amorini, L. Auditore, I. Berceanu, G. Cardella, M. B. Chatterjiee, E. De Filippo, L. FrancalanzA, R. Gianì, L. Grassi, A. Grzeszczuk, E. La Guidara, G. Lanzalone, I. Lombardo, D. Loria, T. Minniti, E. V. Pagano, M. Papa, S. Pirrone, G. Politi, A. Pop, F. Porto, F. Rizzo, E. Rosato, P. Russotto, S. Santoro, A. Trifirò, M. Trimarchi, G. Verde, M. Vigilante Dec. 20, 2012 nucl-ex A new method to extract high resolution angular distributions from kinematical coincidence measurements in binary reactions is presented. Kinematic is used to extract the center of mass angular distribution from the measured energy spectrum of light particles. Results obtained in the case of 10Be+p-->9Be+d reaction measured with the CHIMERA detector are shown. An angular resolution of few degrees in the center of mass is obtained. Correlations between isospin dynamics and Intermediate Mass Fragments emission time scales: a probe for the symmetry energy in asymmetric nuclear matter (1209.6461) E. De Filippo, F. Amorini, L. Auditore, V. Baran, I. Berceanu, G. Cardella, M. Colonna, E. 
Geraci, S. Gianì, L. Grassi, A. Grzeszczuk, P. Guazzoni, J. Han, E. La Guidara, G. Lanzalone, I. Lombardo, C. Maiolino, T. Minniti, A. Pagano, M. Papa, E. Piasecki, S. Pirrone, G. Politi, A. Pop, F. Porto, F. Rizzo, P. Russotto, S. Santoro, A. Trifirò, M. Trimarchi, G. Verde, M. Vigilante, J. Wilczyński, L. Zetta We show new data from the $^{64}$Ni+$^{124}$Sn and $^{58}$Ni+$^{112}$Sn reactions studied in direct kinematics with the CHIMERA detector at INFN-LNS and compared with the reverse kinematics reactions at the same incident beam energy (35 A MeV). Analyzing the data with the method of relative velocity correlations, fragments coming from statistical decay of an excited projectile-like (PLF) or target-like (TLF) fragments are discriminated from the ones coming from dynamical emission in the early stages of the reaction. By comparing data of the reverse kinematics experiment with a stochastic mean field (SMF) + GEMINI calculations our results show that observables from neck fragmentation mechanism add valuable constraints on the density dependence of symmetry energy. An indication is found for a moderately stiff symmetry energy potential term of EOS. Evolution of the decay mechanisms in central collisions of $Xe$ + $Sn$ from $E/A$ = 8 to 29 $MeV$ (1209.6164) A. Chbihi, L. Manduci, J. Moisan, E. Bonnet, J. D. Frankland, R. Roy, G. Verde Collisions of Xe+Sn at beam energies of $E/A$ = 8 to 29 $MeV$ and leading to fusion-like heavy residues are studied using the $4\pi$ INDRA multidetector. The fusion cross section was measured and shows a maximum at $E/A$ = 18-20 $MeV$. A decomposition into four exit-channels consisting of the number of heavy fragments produced in central collisions has been made. Their relative yields are measured as a function of the incident beam energy. The energy spectra of light charged particles (LCP) in coincidence with the fragments of each exit-channel have been analyzed. They reveal that a composite system is formed, it is highly excited and first decays by emitting light particles and then may breakup into 2- or many- fragments or survives as an evaporative residue. A quantitative estimation of this primary emission is given and compared to the secondary decay of the fragments. These analyses indicate that most of the evaporative LCP precede not only fission but also breakup into several fragments. The ASY-EOS experiment at GSI: investigating the symmetry energy at supra-saturation densities (1209.5961) P. Russotto, M. Chartier, E. De Filippo, A. Le Févre, S. Gannon, I. Gašparić, M. Kiš, S. Kupny, Y. Leifels, R.C. Lemmon, J. Łukasik, P. Marini, A. Pagano, P. Pawłowski, S. Santoro, W. Trautmann, M. Veselsky, L. Acosta, M. Adamczyk, A. Al-Ajlan, M. Al-Garawi, S. Al-Homaidhi, F. Amorini, L. Auditore, T. Aumann, Y. Ayyad, V. Baran, Z. Basrak, J. Benlliure, C. Boiano, M. Boisjoli, K. Boretzky, J. Brzychczyk, A. Budzanowski, G. Cardella, P. Cammarata, Z. Chajecki, A. Chbihi, M. Colonna, D. Cozma, B. Czech, M. Di Toro, M. Famiano, E. Geraci, V. Greco, L. Grassi, C. Guazzoni, P. Guazzoni, M. Heil, L. Heilborn, R. Introzzi, T. Isobe, K. Kezzar, A. Krasznahorkay, N. Kurz, E. La Guidara, G. Lanzalone, P. Lasko, Q. Li, I. Lombardo, W. G. Lynch, Z. Matthews, L. May, T. Minniti, M. Mostazo, M. Papa, S. Pirrone, G. Politi, F. Porto, R. Reifarth, W. Reisdorf, F. Riccio, F. Rizzo, E. Rosato, D. Rossi, H. Simon, I. Skwirczynska, Z. Sosin, L. Stuhl, A. Trifiró, M. Trimarchi, M. B. Tsang, G. Verde, M. Vigilante, A. Wieloch, P. Wigg, H. H. Wolter, P. Wu, S. Yennello, P. Zambon, L. 
Zetta, M. Zoric The elliptic-flow ratio of neutrons with respect to protons in reactions of neutron rich heavy-ions systems at intermediate energies has been proposed as an observable sensitive to the strength of the symmetry term in the nuclear Equation Of State (EOS) at supra-saturation densities. The recent results obtained from the existing FOPI/LAND data for $^{197}$Au+$^{197}$Au collisions at 400 MeV/nucleon in comparison with the UrQMD model allowed a first estimate of the symmetry term of the EOS but suffer from a considerable statistical uncertainty. In order to obtain an improved data set for Au+Au collisions and to extend the study to other systems, a new experiment was carried out at the GSI laboratory by the ASY-EOS collaboration in May 2011. Isospin observables from fragment energy spectra (1208.3108) T. X. Liu, W. G. Lynch, R. H. Showalter, M. B. Tsang, X. D. Liu, W. P. Tan, M. J. van Goethem, G. Verde, A. Wagner, H. F. Xi, H. S. Xu, M. A. Famiano, R. T. de Souza, V. E. Viola, R. J. Charity, L. G. Sobotka Aug. 15, 2012 nucl-ex The energy spectra of light charged particles and intermediate mass fragments from 112Sn+112Sn and 124Sn+124Sn collisions at an incident energy of E/A=50 MeV have been measured with a large array of Silicon strip detectors. We used charged particle multiplicities detected in an array with nearly 4-pi coverage to select data from the central collision events. We study isospin observables analogous to ratios of neutron and proton spectra, including double ratios and yield ratios of t/3He and of asymmetries constructed from fragments with Z=3 to Z=8. Using the energy spectra, we can construct these observables as functions of kinetic energy. Most of the fragment asymmetry observables have a large sensitivity to sequential decays. Neutron recognition in the LAND detector for large neutron multiplicity (1203.5608) P. Pawłowski, J. Brzychczyk, Y. Leifels, W. Trautmann, P. Adrich, T. Aumann, C. O. Bacri, T. Barczyk, R. Bassini, S. Bianchin, C. Boiano, K. Boretzky, A. Boudard, A. Chbihi, J. Cibor, B. Czech, M. De Napoli, J.-E. Ducret, H. Emling, J. D. Frankland, T. Gorbinet, M. Hellström, D. Henzlova, S. Hlavac, J. Immè, I. Iori, H. Johansson, K. Kezzar, S. Kupny, A. Lafriakh, A. Le Fèvre, E. Le Gentil, S. Leray, J. Łukasik, J. Lühning, W. G. Lynch, U. Lynen, Z. Majka, M. Mocko, W. F. J. Müller, A. Mykulyak, H. Orth, A. N. Otte, R. Palit, S. Panebianco, A. Pullia, G. Raciti, E. Rapisarda, D. Rossi, M.-D. Salsac, H. Sann, C. Schwarz, H. Simon, C. Sfienti, K. Sümmerer, M. B. Tsang, G. Verde, M. Veselsky, C. Volant, M. Wallace, H. Weick, J. Wiechula, A. Wieloch, B. Zwiegliński The performance of the LAND neutron detector is studied. Using an event-mixing technique based on one-neutron data obtained in the S107 experiment at the GSI laboratory, we test the efficiency of various analytic tools used to determine the multiplicity and kinematic properties of detected neutrons. A new algorithm developed recently for recognizing neutron showers from spectator decays in the ALADIN experiment S254 is described in detail. Its performance is assessed in comparison with other methods. The properties of the observed neutron events are used to estimate the detection efficiency of LAND in this experiment. Correlations between emission timescale of fragments and isospin dynamics in $^{124}$Sn+$^{64}$Ni and $^{112}$Sn+$^{58}$Ni reactions at 35 AMeV (1206.0697) E. De Filippo, A. Pagano, P. Russotto, F. Amorini, A. Anzalone, L. Auditore, V. Baran, I. Berceanu, B. Borderie, R. 
Bougault, M. Bruno, T. Cap, G. Cardella, S. Cavallaro, M.B. Chatterjee, A. Chbihi, M. Colonna, M. D'Agostino, R. Dayras, M. Di Toro, J. Frankland, E. Galichet, W. Gawlikowicz, E. Geraci, A. Grzeszczuk, P. Guazzoni, S. Kowalski, E. La Guidara, G. Lanzalone, G. Lanzanò, N. Le Neindre, I. Lombardo, C. Maiolino, M. Papa, E. Piasecki, S. Pirrone, R. Planeta, G. Politi, A. Pop, F. Porto, M.F. Rivet, F. Rizzo, E. Rosato, K. Schmidt, K. Siwek-Wilczynska, I. Skwira-Chalot, A. Trifirò, M. Trimarchi, G. Verde, M. Vigilante, J.P. Wieleczko, J. Wilczynski, L. Zetta, W. Zipper June 4, 2012 nucl-ex We present a new experimental method to correlate the isotopic composition of intermediate mass fragments (IMF) emitted at mid-rapidity in semi-peripheral collisions with the emission timescale: IMFs emitted in the early stage of the reaction show larger values of $<$N/Z$>$ isospin asymmetry, stronger angular anisotropies and reduced odd-even staggering effects in neutron to proton ratio $<$N/Z$>$ distributions than those produced in sequential statistical emission. All these effects support the concept of isospin "migration", that is sensitive to the density gradient between participant and quasi-spectator nuclear matter, in the so called neck fragmentation mechanism. By comparing the data to a Stochastic Mean Field (SMF) simulation we show that this method gives valuable constraints on the symmetry energy term of nuclear equation of state at subsaturation densities. An indication emerges for a linear density dependence of the symmetry energy. Angular Dependence in Proton-Proton Correlation Functions in Central $^{40}Ca+^{40}Ca$ and $^{48}Ca+^{48}Ca$ Reactions (1108.2552) V. Henzl, M. A. Kilburn, Z. Chajecki, D. Henzlova, W. G. Lynch, D. Brown, A. Chbihi, D. Coupland, P. Danielewicz, R. deSouza, M. Famiano, C. Herlitzius, S. Hudan, Jenny Lee, S. Lukyanov, A. M. Rogers, A. Sanetullaev, L. Sobotka, Z. Y. Sun, M. B. Tsang, A. Vander Molen, G. Verde, M. Wallace, M. Youngs The angular dependence of proton-proton correlation functions is studied in central $^{40}Ca+^{40}Ca$ and $^{48}Ca+^{48}Ca$ nuclear reactions at E=80 MeV/A. Measurements were performed with the HiRA detector complemented by the 4$\pi$ Array at NSCL. A striking angular dependence in the laboratory frame is found within p-p correlation functions for both systems that greatly exceeds the measured and expected isospin dependent difference between the neutron-rich and neutron-deficient systems. Sources measured at backward angles reflect the participant zone of the reaction, while much larger sources observed at forward angles reflect the expanding, fragmenting and evaporating projectile remnants. The decrease of the size of the source with increasing momentum is observed at backward angles while a weaker trend in the opposite direction is observed at forward angles. The results are compared to the theoretical calculations using the BUU transport model.
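Several of the measurements summarized above rely on two-particle correlation functions built with an event-mixing background (the LAND neutron study and the proton-proton correlation analysis, for example). The following is a generic, minimal Python sketch of that construction, not the analysis code of any of the quoted experiments; the pair variable (half the relative momentum), the binning, and the mixing depth are arbitrary illustrative choices.

import numpy as np

def correlation_function(events, bins, n_mixed=50000, seed=0):
    # C(q) = same-event pair distribution / mixed-event pair distribution,
    # each normalized to unit integral. `events` is a list of (n_i, 3) arrays
    # of particle momenta.
    rng = np.random.default_rng(seed)
    same = [np.linalg.norm(ev[i] - ev[j]) / 2.0
            for ev in events
            for i in range(len(ev)) for j in range(i + 1, len(ev))]
    mixed = []
    while len(mixed) < n_mixed:
        a, b = rng.integers(len(events), size=2)
        if a == b or len(events[a]) == 0 or len(events[b]) == 0:
            continue
        pa = events[a][rng.integers(len(events[a]))]
        pb = events[b][rng.integers(len(events[b]))]
        mixed.append(np.linalg.norm(pa - pb) / 2.0)
    num, edges = np.histogram(same, bins=bins)
    den, _ = np.histogram(mixed, bins=bins)
    c = np.where(den > 0, (num / num.sum()) / (den / den.sum()), 0.0)
    return 0.5 * (edges[1:] + edges[:-1]), c

rng = np.random.default_rng(1)
events = [rng.normal(size=(rng.integers(2, 6), 3)) for _ in range(200)]
q, c = correlation_function(events, bins=np.linspace(0.0, 2.0, 21))
print(c)   # close to 1 everywhere for these uncorrelated synthetic events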
A relation connecting two real numbers $ a _ {1} $ and $ a _ {2} $ by one of the symbols $ < $ (less than), $ \leq $ (less than or equal to), $ > $ (greater than), $ \geq $ (greater than or equal to), $ \neq $ (unequal to), that is, $$ a _ {1} < a _ {2} ,\ \ a _ {1} \leq a _ {2} ,\ \ a _ {1} > a _ {2} ,\ \ a _ {1} \geq a _ {2} ,\ \ a _ {1} \neq a _ {2} . $$ Sometimes several inequalities are written together, for example, $$ a < b < c . $$ Inequalities have many properties in common with equalities. Thus, an inequality remains valid if one and the same number is added to (or subtracted from) both sides. Also, both sides can be multiplied by one and the same positive number. However, if both sides are multiplied by a negative number, the sign of an inequality is changed into the opposite (that is, $ < $ is replaced by $ > $ and $ > $ by $ < $). From $ A < B $ and $ C < D $ it follows that $ A + C < B + D $ and $ A - D < B - C $, that is, inequalities with equal signs ( $ A < B $ and $ C < D $) can be added term-by-term, and inequalities with opposite signs ( $ A < B $ and $ D > C $) subtracted term-by-term. If $ A $, $ B $, $ C $, and $ D $ are positive, then $ A < B $ and $ C < D $ imply $ AC < BD $ and $ A / D < B / C $, that is, inequalities with equal signs (between positive numbers) can be multiplied term-by-term and inequalities with opposite signs can be divided term-by-term.

Inequalities in which there occur quantities that can take distinct numerical values may be true for some such values and false for others. For example, the inequality $ x ^ {2} - 4x + 3 > 0 $ is true for $ x = 4 $ and false for $ x = 2 $. For inequalities of this kind the question arises of their solution, that is, of the bounds within which one must take the quantities occurring in the inequality so as to make it valid. Thus, by writing $ x ^ {2} - 4x + 3 > 0 $ in the form $ ( x- 1 ) ( x- 3 ) > 0 $ one observes that it is true for all $ x $ with $ x < 1 $ or $ x > 3 $, which are then the solutions of the given inequality.

Below some inequalities that hold identically in some range of variation of the variables occurring in them are given.

1) Inequality for the absolute value. For any real or complex numbers $ a _ {1} \dots a _ {n} $: $$ | a _ {1} + \dots + a _ {n} | \leq | a _ {1} | + \dots + | a _ {n} | . $$

2) Inequality for means. There are well-known inequalities connecting the harmonic, geometric, arithmetic, and quadratic mean (cf. Arithmetic mean; Geometric mean; Harmonic mean; Quadratic mean): $$ \frac{n}{ \frac{1}{a _ {1} } + \dots + \frac{1}{a _ {n} } } \leq ( a _ {1} \dots a _ {n} ) ^ {1/n} \leq \frac{a _ {1} + \dots + a _ {n} }{n} \leq \sqrt { \frac{a _ {1} ^ {2} + \dots + a _ {n} ^ {2} }{n} } . $$ Here all the numbers $ a _ {1} \dots a _ {n} $ must be positive. (A short numerical check of this chain is given at the end of the article.)

3) Inequalities for sums and their integral analogues. Such are, for example, the Bunyakovskii inequality, the Hölder inequality, the Hilbert inequality, and the Cauchy inequality.

4) Inequalities for powers of numbers. Here the best known is the Minkowski inequality and its generalizations to the case of series and integrals.

5) Inequalities for certain classes of sequences and functions. Examples are the Chebyshev inequality for monotone sequences and the Jensen inequality for convex functions.

6) Inequalities for determinants. For example, Hadamard's inequality, see Hadamard theorem on determinants.

7) Linear inequalities.
One considers a system of inequalities of the form $$ a _ {i1} x _ {1} + \dots + a _ {in} x _ {n} \geq b _ {i} ,\ \ i = 1 \dots n . $$ The set of solutions of this system of inequalities is a certain convex polyhedron in the $ n $- dimensional space $ ( x _ {1} \dots x _ {n} ) $; the task of the theory of linear inequalities (cf. Linear inequality) consists in the study of properties of this polyhedron.

Inequalities are of substantial value in all branches of mathematics. In number theory an entire section, Diophantine approximations, is completely based on inequalities; analytic number theory also operates frequently with inequalities (see, for example, Vinogradov estimates). In geometry, inequalities occur in the theory of convex bodies and in the isoperimetric problem (see Isoperimetric inequality; Isoperimetric inequality, classical). In probability theory many laws are stated by means of inequalities (see, for example, Chebyshev inequality in probability theory and its generalization, the Kolmogorov inequality). In the theory of differential equations one uses so-called differential inequalities (cf. Differential inequality). In function theory and approximation theory one uses various inequalities for derivatives of polynomials and trigonometric polynomials (see, for example, Bernstein inequality, Jackson inequality); on inequalities connected with the imbedding of classes of differentiable functions, see Kolmogorov inequality; Imbedding theorems. In functional analysis, in the definition of the norm in a function space, one requires the norm to satisfy the triangle inequality $ \| x + y \| \leq \| x \| + \| y \| $. Many classical inequalities determine in essence the value of the norm of a linear functional or linear operator in one space or another or give estimates for them (see, for example, Bessel inequality; Minkowski inequality). In computational mathematics inequalities are used to estimate errors in the approximate solution of a problem.

A well-known inequality for the absolute value is the triangle inequality: $$ | | a | - | b | | \leq | a + b | \leq | a | + | b | , $$ for $ a , b \in \mathbf C $. The inequality stated above in 1) is a generalization of this. The Bunyakovskii inequality is better known as the Cauchy (Cauchy–Schwarz) inequality.

Source: Inequality. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Inequality&oldid=47338 (adapted from the article "Inequality" in BSE-3, which appeared in Encyclopedia of Mathematics, ISBN 1402006098).
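As the numerical check promised in item 2) above, the following short Python sketch verifies the chain harmonic mean ≤ geometric mean ≤ arithmetic mean ≤ quadratic mean on a random sample of positive numbers (an illustration only, not part of the encyclopedia article):

import numpy as np

rng = np.random.default_rng(1)
a = rng.uniform(0.1, 10.0, size=1000)        # the a_i must all be positive

harmonic   = len(a) / np.sum(1.0 / a)
geometric  = np.exp(np.mean(np.log(a)))      # equals (a_1 ... a_n)^(1/n)
arithmetic = np.mean(a)
quadratic  = np.sqrt(np.mean(a ** 2))

assert harmonic <= geometric <= arithmetic <= quadratic
print(harmonic, geometric, arithmetic, quadratic)

Equality holds throughout only when all the a_i are equal.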
Living Reviews in Solar Physics
Wave Modeling of the Solar Wind
Leon Ofman
First Online: 15 October 2010

The acceleration and heating of the solar wind have been studied for decades using satellite observations and models. However, the exact mechanism that leads to solar wind heating and acceleration is poorly understood. In order to improve the understanding of the physical mechanisms that are involved in these processes, a combination of modeling and observational analysis is required. Recent models constrained by satellite observations show that wave heating in the low-frequency (MHD) and high-frequency (ion-cyclotron) ranges may provide the necessary momentum and heat input to coronal plasma and produce the solar wind. This review is focused on the results of several recent solar modeling studies that include waves explicitly in the MHD and the kinetic regime. The current status of the understanding of the solar wind acceleration and heating by waves is reviewed.

Keywords: Solar wind; Waves; MHD models; Multi-fluid models; Hybrid models

The solar wind plasma plays a major role in the physical connection between the Sun and the Earth, and affects the plasma conditions in the heliosphere. The solar wind is an important component of space weather, and forms the background upon which solar disturbances, such as CMEs and energetic particles, propagate towards the Earth. However, the exact physics of formation of the solar wind is poorly understood. Remote sensing and in-situ observations indicate that there are two types of solar wind: slow and fast. The fast solar wind, reaching the speed of ~ 800 km s−1 at 1 AU, is steady and of low density (a few particles per cm3 at 1 AU); the slow solar wind reaches half the above speed with an order of magnitude higher density (see Figure 1). At solar minimum the solar wind speed is a clear function of latitude as measured by Ulysses' SWOOPS instrument, with the fast wind emerging from polar coronal holes, and the slow wind confined to the equatorial regions. Near the solar maximum the corona is dominated by streamers and by slow solar wind. The heavy ion composition of the solar wind is correlated with the fast and slow wind, indicating their coronal origin.

Figure 1: The solar wind speed as a function of latitude (in km s−1) measured by Ulysses' SWOOPS instrument near solar minimum (left panel) and near solar maximum (right panel). The direction of the magnetic field is marked (red: outward; blue: inward). The typical composite solar images near minimum (8/17/96) and maximum (12/07/00) are shown using data from SOHO/LASCO, EIT, and Mauna Loa K-coronameter images (McComas et al., 2003).

The twin Helios spacecraft investigated the heliosphere in the region 0.3 to 1 AU, found evidence for a spectrum of magnetic fluctuations, and investigated the radial dependence of plasma parameters and the velocity distributions in the solar wind. These measurements provide the basis of many theoretical studies of solar wind acceleration and heating (see the review by Marsch, 2006). Recent in-situ observations by ACE (Stone et al., 1998), Wind (e.g., Lepping et al., 1995), Ulysses (e.g., Balogh et al., 1992), and other spacecraft confirm that the solar wind contains magnetic fluctuations that have a power-law dependence on frequency (see Figure 2, adapted from Smith et al., 2006). The magnetic fluctuations exhibit high correlation with velocity fluctuations, indicating their Alfvénic nature.
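The Alfvénic nature mentioned above is commonly quantified through the correlation of velocity and magnetic-field fluctuations, for example by the normalized cross helicity. The following minimal Python sketch computes it for synthetic time series; the numerical values (fluctuation level, proton density) are illustrative assumptions, not data from the review.

import numpy as np

mu0 = 4e-7 * np.pi          # SI permeability
mp = 1.6726e-27             # proton mass [kg]

def normalized_cross_helicity(dv, db, n_p):
    # sigma_c = 2 <dv . b> / (<dv^2> + <b^2>), with b = db in Alfven (velocity) units;
    # sigma_c -> +/-1 for purely Alfvenic, fully (anti-)correlated fluctuations.
    b = db / np.sqrt(mu0 * n_p * mp)
    ev = np.mean(np.sum(dv * dv, axis=1))
    eb = np.mean(np.sum(b * b, axis=1))
    hc = np.mean(np.sum(dv * b, axis=1))
    return 2.0 * hc / (ev + eb)

rng = np.random.default_rng(2)
db = 1e-9 * rng.standard_normal((4096, 3))          # ~1 nT fluctuations
dv = -db / np.sqrt(mu0 * 5.0e6 * mp)                # perfectly (anti-)correlated case
print(normalized_cross_helicity(dv, db, 5.0e6))     # -> -1 for this construction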
Remote sensing observations show that the solar wind is heated and accelerated close to the Sun within 10 R⊙ (SOHO/UVCS; Kohl et al., 1997). The observed kinetic and compositional properties provide clues on the coronal origin (i.e., coronal holes, active regions) (e.g., Ko et al., 2008), and on the acceleration and heating mechanism. However, the interpretation of observations requires theoretical and computational modeling of the global and kinetic properties of the solar wind to understand the physics and the dynamics of the multi-ion solar wind plasma acceleration and heating, and improve the accuracy of space weather forecasting.

Figure 2: Example of magnetic spectrum from the ACE database. Year/day of sample (decimal day of year in 1998) and Power Spectral Density (PSD) are shown. The top curve represents the summed power in the two components perpendicular to the mean magnetic field, and the lower curve represents the power in the parallel component. The fit functions with power exponent −1.56 ± 0.3 (upper) and −1.8 ± 0.03 (lower) are shown (see Smith et al., 2006, for further details).

In the region between 1 R⊙ and 1 AU the corona and the heliosphere span many orders of magnitude in relevant temporal-spatial scales, density, magnetic field strength, and in other physical parameters. As a result, the physical processes in the plasma and the modeling approaches are dramatically different in the collisional lower corona, with densities on the order of 10^8–10^9 cm−3 and magnetic field strength of 1 to 100 G, and in the heliospheric plasma near 1 AU, with densities of a few particles per cm3 and magnetic field strength of a few nano-tesla (a few tens of micro-Gauss). Correspondingly, the plasma frequency, collision frequency, and the proton gyrofrequency vary over many orders of magnitude between 1 R⊙ and 1 AU. This disparity of scales and physical regimes leads to difficulty in the physical modeling of the heliospheric plasma with a single 'all-inclusive' model. For example, solving the Boltzmann equations in three spatial dimensions and six degrees of freedom is not practical with current or foreseeable future computational resources. A practical modeling approach that is applicable on large scales (usually, MHD) cannot resolve the physics of the small scale (kinetic) phenomena. The global models provide little or no information on the kinetics of the heating and acceleration processes. The solution of this difficulty is a multi-level modeling approach, where global models are used for large scale structures, with small scales parameterized as external (to MHD) diffusion coefficients and heating/loss functions. Kinetic models are used to study the physics of the local heating and dissipation phenomena on small spatial-temporal scales. The output of the kinetic models can serve as a guide of the diffusion processes that need to be included in global models. As a result, there are two main types of solar wind models: (1) observationally driven global MHD models that provide the overall global structure of the solar wind as shaped by the interaction with the solar magnetic field, rotation, gravity, and heating; (2) local models that deal with the physical processes of heating and acceleration of the solar wind magnetized plasma, with the initial state and boundary conditions not necessarily tied to a particular observation. The output of these models provides the solar wind speed as a function of position, density, temperature, and other parameters.
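As an aside on the power-law fits quoted in the caption of Figure 2 above, a spectral slope of that kind can be estimated from a time series with a short log-log least-squares fit, sketched below in Python on synthetic data (not the ACE time series); the band limits and normalization are arbitrary choices.

import numpy as np

def psd_power_law_slope(signal, dt, fmin, fmax):
    # One-sided periodogram of `signal`, followed by a log-log linear fit of
    # the PSD over [fmin, fmax]; returns the power-law exponent.
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=dt)
    psd = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2 * 2.0 * dt / n
    band = (freqs >= fmin) & (freqs <= fmax)
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
    return slope

rng = np.random.default_rng(3)
x = np.cumsum(rng.standard_normal(2 ** 16))     # integrated white noise: PSD ~ f^-2
print(psd_power_law_slope(x, dt=1.0, fmin=1e-3, fmax=1e-1))   # close to -2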
Usually, the acceleration of the solar wind is modeled by introducing simplifying assumptions into the energy equation, such as taking the plasma to be isothermal at coronal temperature, or assuming a polytropic index below the adiabatic value γ = 5/3, which can also vary with heliocentric distance (Cohen et al., 2007). The first isothermal and polytropic solar wind expansion models were developed by Parker (1958, 1963) in one spatial dimension. Present global models provide the 3D structure of the solar wind and try to match empirically the observations at 1 AU by adjusting the model parameters (e.g., Mikić et al., 1999; Linker et al., 1999; Usmanov et al., 2000; Roussev et al., 2003; Usmanov and Goldstein, 2003; Cohen et al., 2007). Recently, 2D MHD models that use observations to constrain the solar wind heating function and momentum input were developed (Sittler Jr and Guhathakurta, 1999; Vásquez et al., 2003; Guhathakurta et al., 2006; Sittler Jr and Ofman, 2006). The empirical heating function was recently included in 2.5D solar wind MHD models (Sittler Jr and Ofman, 2006; Airapetian et al., 2011). On the other end of the scale are the kinetic models, which provide the description of the small scale interaction in the solar wind plasma between the waves, the ions, and the background magnetic field. These models usually do not provide information on the global structure of the solar wind in the heliosphere. However, the kinetic models are well suited for the investigation of the kinetic processes and instabilities involved in heating of solar wind-like plasma. Due to the complexity of the kinetic models, and the necessity to resolve the fine scale on the proton, or even electron, Larmor radius scale, it is computationally difficult to include global scale structures. The range in between the above two modeling approaches is occupied by MHD models that include explicit heating and acceleration of the solar wind by MHD waves. Following the Osterbrock (1961) study suggesting MHD waves for the heating of the solar chromosphere and corona, the acceleration of the solar wind by Alfvén waves was studied in the past (Barnes, 1969; Alazraki and Couturier, 1971; Belcher, 1971; Belcher and Davis Jr, 1971; Heinemann and Olbert, 1980). Recently, one-dimensional (Cranmer and van Ballegooijen, 2005; Suzuki and Inutsuka, 2005, 2006; Cranmer et al., 2007) and three-dimensional MHD models (e.g., Usmanov et al., 2000; Evans et al., 2009) were developed (see the review by Ofman, 2005). However, due to the requirements of time step and resolution, the waves are not included explicitly in global models. They are modeled by an additional wave-pressure term and wave-energy equations. Only a few models consider the resolved waves explicitly in 2.5D MHD models (Ofman and Davila, 1997, 1998). The next level of plasma approximation of a fully resolved wave-driven wind is via multi-fluid models, which describe each particle species as a separate fluid (Ofman and Davila, 2001; Ofman, 2004a). The fluids interact through momentum and energy exchanges, through electromagnetic interactions resulting from the quasi-neutrality condition, and through their contribution to the net current. These models can be tested directly by comparing to observations that contain heavy ion emission (e.g., Ofman, 2004b; Abbo et al., 2010).
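To make the Parker (1958, 1963) models mentioned above concrete, the classical isothermal, one-dimensional wind solution can be obtained by solving Parker's transcendental relation for the Mach number on either side of the critical (sonic) point. The Python sketch below is a standard textbook construction, not code from any of the cited models; the coronal temperature and mean molecular weight are illustrative assumptions.

import numpy as np
from scipy.optimize import brentq

G, Msun, Rsun = 6.674e-11, 1.989e30, 6.957e8     # SI units
kB, mp = 1.381e-23, 1.673e-27

def parker_isothermal_wind(r, T=1.5e6, mu=0.6):
    # Transonic Parker wind speed [m/s] at heliocentric distance r [m] for an
    # isothermal corona of temperature T; mu is the mean molecular weight.
    cs = np.sqrt(kB * T / (mu * mp))             # isothermal sound speed
    rc = G * Msun / (2.0 * cs ** 2)              # critical (sonic) radius
    def f(M, rr):                                # Parker's transcendental relation
        return M ** 2 - np.log(M ** 2) - 4.0 * np.log(rr / rc) - 4.0 * rc / rr + 3.0
    r = np.atleast_1d(np.asarray(r, dtype=float))
    v = np.empty_like(r)
    for i, rr in enumerate(r):
        if rr < rc:                              # subsonic branch inside the critical point
            v[i] = cs * brentq(f, 1e-8, 1.0 - 1e-8, args=(rr,))
        else:                                    # supersonic branch outside it
            v[i] = cs * brentq(f, 1.0 + 1e-8, 50.0, args=(rr,))
    return v

print(parker_isothermal_wind([10 * Rsun, 1.496e11]) / 1e3)   # speeds in km/s

With these illustrative parameters the sonic point lies at a few solar radii and the wind reaches several hundred km s−1 at 1 AU, in the range of observed wind speeds quoted earlier.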
Observations with the SOHO Ultraviolet Coronagraph Spectrometer (UVCS) show that heavy ions such as O5+ and Mg9+ undergo preferential perpendicular heating, causing large temperature anisotropy (T⊥/T∥ > 10), and are hotter and flow faster in coronal holes than protons (Kohl et al., 1997, 1998; Li et al., 1998; Cranmer et al., 1999). Enhanced perpendicular heating of ions compared to protons has also been observed in streamers (Strachan et al., 2002; Uzzo et al., 2007). The magnitude of this effect is significant, but smaller in streamers than in coronal holes. Ulysses and Helios in-situ measurements in the heliosphere have shown that minor ions flow faster than protons by the local Alfvén speed and are preferentially heated as well, and proton distributions often appear double-peaked with an average relative drift parallel to the background magnetic field (e.g., Marsch et al., 1982b; Feldman et al., 1996; Neugebauer et al., 1996). The temperature anisotropy of protons deduced from remote sensing and in-situ observations of fast solar wind streams provides indirect evidence for the presence of the ion-cyclotron waves in coronal plasma, since the anisotropy can be produced by the resonant absorption of the ion-cyclotron waves. Purely adiabatic expansion of the solar plasma is expected to result in an opposite effect: T⊥ < T∥ due to the conservation of the magnetic moment of the expanding ions in the decreasing radial magnetic field (e.g., Marsch, 2006). However, T⊥/T∥ > 1 is observed in the heliosphere (e.g., Marsch et al., 1982b; Gazis and Lazarus, 1982). In the past, several theories of ion-cyclotron resonance have been developed and applied to the heating of the solar corona and the solar wind (e.g., Axford and McKenzie, 1992; Marsch, 1992; Tu and Marsch, 1997; Li et al., 1999; Hollweg, 2000; Hu et al., 2000; Cranmer, 2000; Hollweg and Isenberg, 2002). However, there are theoretical difficulties with the application of the ion-cyclotron mechanism for coronal heating, and its role is not yet fully understood (e.g., Cranmer, 2000; Isenberg, 2004). Most such theories may be classified as either fluid-like or quasi-linear kinetic models. The limitation of the fluid or quasi-linear kinetic models is the assumption of a fixed-shape ion velocity distribution and of quasi-linear limits (i.e., small magnetic fluctuation amplitude allowing a simplified description of wave-particle interactions). In the hybrid models the electrons are treated as a fluid, and the ions are treated fully kinetically as particles. Hybrid simulations (see Section 2.3 below) allow relaxing many approximations used in the fluid, multi-fluid, and in linear or quasi-linear kinetic theory. The model is nonlinear, and can describe both the brief initial linear evolution of the plasma, as well as the nonlinear saturated state. Recently, new kinetic models of heating and acceleration of solar coronal plasma in an inhomogeneous magnetic field by Alfvén waves were developed (Galinsky and Shevchenko, 2013a,b). The generation of the solar wind by parallel (Isenberg, 2012; Mecheri, 2013) and oblique (Chandran et al., 2010; Isenberg and Vasquez, 2011) ion-cyclotron waves was also studied. The possible role of kinetic Alfvén waves (KAW) (e.g., Voitenko and Goossens, 2006; Dwivedi et al., 2012) and Alfvén wave turbulence (Chandran, 2010; Chandran et al., 2011; Li et al., 2011; Cranmer and van Ballegooijen, 2012) in the acceleration and heating of the solar wind was recently considered.
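The expectation quoted above, that purely adiabatic expansion would give T⊥ < T∥, follows from the double-adiabatic (CGL) invariants. A minimal Python sketch, assuming the simplest radial scalings n ∝ r^−2 and B ∝ r^−2 purely for illustration (not a model from the review):

import numpy as np

def cgl_anisotropy(r, r0=1.0, Tperp0=1.0, Tpar0=1.0):
    # Double-adiabatic (CGL) evolution: Tperp/B = const (magnetic moment),
    # Tpar * B^2 / n^2 = const (longitudinal invariant).
    n = (r0 / r) ** 2                 # assumed density scaling
    B = (r0 / r) ** 2                 # assumed radial-field scaling
    Tperp = Tperp0 * B
    Tpar = Tpar0 * (n / B) ** 2
    return Tperp / Tpar

for r in [1.0, 2.0, 5.0, 10.0]:
    print(r, cgl_anisotropy(r))       # anisotropy drops as r^-2, i.e., T_perp < T_par

The observed T⊥/T∥ > 1 therefore requires sustained perpendicular heating, which is the role attributed to resonant absorption of ion-cyclotron waves in the text.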
Recently, Cranmer and van Ballegooijen (2005) solved the linearized wave equation for Alfvén waves in the heliosphere. The dependence on heliocentric distance of the frequency-integrated Alfvénic velocity amplitude obtained from the solution of the linearized Alfvén wave equation driven by a spectrum of transverse photospheric fluctuations with an amplitude of 3 km s−1 at the photosphere is compared to the Alfvén wave amplitude inferred from spectroscopic observations (SOHO/SUMER and UVCS), IPS measurements, and in-situ data in Figure 3. The study shows that there is generally good qualitative agreement between observations and the theoretical prediction of the Alfvén wave evolution in the heliosphere, even for a linearized model.

Figure 3: Height dependence of the frequency-integrated velocity amplitude obtained from the solution of the linearized Alfvén wave equation driven by a spectrum of transverse photospheric fluctuations. The solution was obtained in the thin flux tube approximation by Cranmer and van Ballegooijen (2005). Solid lines give the undamped value of 〈δV〉 (the dashed line is for different model parameters). The red line is \(\left\langle {\delta V} \right\rangle _B = \left\langle {\delta B} \right\rangle /\sqrt {4\pi \rho } \) for the 3 km s−1 driving amplitude case. The symbols correspond to observational constraints on the Alfvén wave amplitudes from various observations discussed by Cranmer and van Ballegooijen (2005) (reproduced by permission of the AAS).

The turbulence in the solar wind magnetized plasma has been studied for decades in the past (see the review by Velli, 2003), and recently (Verdini et al., 2009; Chandran and Hollweg, 2009; Chandran et al., 2009; Verdini et al., 2010; Markovskii et al., 2010) as the possible state that leads to a cascade of energy from the observed large scale fluctuations and waves to small scale structures, down to dissipation scales that can heat the solar wind plasma. Observations of Alfvénic fluctuations in the solar wind by the Helios and Ulysses spacecraft show that the turbulent energy carried by these fluctuations is distributed in frequency according to a power law, at high frequencies going as f−5/3, a Kolmogorov spectrum, while at lower frequencies the spectrum flattens to f−1 (where f is the fluctuation frequency) (Goldstein et al., 1995). Alfvénic turbulence is predominant in fast wind streams, while in the slow solar wind the turbulence is of a more complex nature with low Alfvénicity (see the reviews by Tu and Marsch, 1995; Bruno and Carbone, 2013). Recent observations by the ACE spacecraft of the solar wind protons at 1 AU indicate that the turbulent cascade rate agrees better with the Kraichnan (f−3/2) rate than with the Kolmogorov (f−5/3) rate (Vasquez et al., 2007). Similar results were seen by the Wind spacecraft (Podesta et al., 2006). Part of the fluctuating power at low frequencies can be attributed to propagating structures in the solar wind. However, there is strong evidence that the fluctuations are Alfvénic at frequencies of milli-Hertz and higher. Recent observations by the Hinode satellite show that Alfvénic fluctuations are the likely energy source that drives the solar wind (e.g., De Pontieu et al., 2007; Ofman and Wang, 2008; Hahn et al., 2012; Hahn and Savin, 2013). A review of observational evidence for propagating MHD waves in coronal holes that may accelerate the solar wind is found in Banerjee et al. (2011).
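For orientation on the height dependence shown in Figure 3, the undamped, frequency-integrated amplitude is often approximated by the WKB scaling 〈δV〉 ∝ ρ^(−1/4) (conservation of wave energy flux, neglecting the wind speed). The Python sketch below uses that scaling with a purely illustrative density profile; it is not the full non-WKB linearized solution of Cranmer and van Ballegooijen (2005).

import numpy as np

def wkb_amplitude(r, dv0=3.0, r0=1.0):
    # WKB scaling of the undamped Alfven wave velocity amplitude, dV ~ rho^(-1/4).
    # r in solar radii, dv0 in km/s at r = r0; the density profile below is an
    # arbitrary illustration (steep near-Sun component plus an r^-2 tail).
    def rho(rr):
        return rr ** -2 + 50.0 * rr ** -8
    return dv0 * (rho(np.asarray(r, dtype=float)) / rho(r0)) ** -0.25

for r in [1.0, 1.5, 3.0, 10.0, 215.0]:      # 215 Rsun ~ 1 AU
    print(r, wkb_amplitude(r))

Damping and reflection modify this simple scaling, which is one reason the full linearized treatment of Cranmer and van Ballegooijen (2005) is needed.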
The recently launched NASA Solar Dynamics Observatory (SDO) provides an unparalleled opportunity to study the solar coronal wave spectrum over the entire disk of the Sun at high temporal and spatial resolution. The analysis of the data from the Atmospheric Imaging Assembly (AIA) onboard SDO will likely provide constraints on the input wave spectrum that drives the solar wind. Although the observed spectrum is limited to the MHD frequency range, since the temporal resolution does not allow resolving frequencies up to the gyroresonant range (∼ kHz), the form of the spectrum is likely to provide clues on the relevant turbulent cascade processes. The next level of solar wind plasma approximation is provided by hybrid models (see Section 2.3), which describe protons and other ions kinetically as particles, and electrons as a neutralizing background fluid (Winske and Omidi, 1993). Hybrid simulations can represent more completely (than fluid models) and self-consistently the wave-particle interactions in the multi-ion solar wind magnetized plasma. The models can be used to describe the kinetic processes involved in heating by a spectrum of waves, and to study the nonlinear and resonant interactions of the turbulent spectrum with the ions. Such numerical methods have the potential to model the wave-particle interactions, and the corresponding velocity distributions and magnetic fluctuations in the nonlinear saturated state, which can be compared to in-situ observations. Recently, one-dimensional hybrid simulations of multi-ion solar wind plasma were used to study the heating by a wave spectrum, beams, and the stability of solar wind multi-ion plasma (Liewer et al., 2001; Ofman et al., 2001, 2002; Xie et al., 2004; Lu and Wang, 2005; Li and Habbal, 2005; Hellinger et al., 2005; Ofman et al., 2005). Due to the local nature of the hybrid models, they require special treatment to take into account the global properties of the solar wind, such as the expansion of the solar wind into the heliosphere (Liewer et al., 2001; Hellinger et al., 2005; Ofman et al., 2011). We will review some of the recent results of hybrid simulation models of solar wind plasma heating. Two-dimensional hybrid models of homogeneous multi-ion plasma heating were also studied recently (Gary et al., 2001, 2003; Kaghashvili et al., 2003; Hellinger and Trávníček, 2006; Gary et al., 2006). However, only a few studies considered the effect of time dependent wave fluctuations on solar wind plasma heating with 2D hybrid models (Ofman and Viñas, 2007; Ofman, 2010; Markovskii et al., 2010). It was found that the input wave spectrum can heat the ions by resonant interaction, as well as through the non-resonant parametric decay instability of Alfvén waves (e.g., Araneda et al., 2007). It was also found that the presence of small scale inhomogeneity in the background plasma can enhance the heating by the high frequency Alfvén wave spectrum (Ofman, 2010). This review is focused on selected wave-acceleration models (both MHD and kinetic) of the solar wind. Other reviews of solar wind models were recently published (Hansteen and Velli, 2012; Cranmer, 2012).
2 Model Equations
Below we provide the typical basic equations used in the three classes of models reviewed here: MHD, multi-fluid, and hybrid.
2.1 Single fluid MHD
The single-fluid, normalized visco-resistive MHD equations with gravity are (see, e.g., Priest, 1982; Ofman, 2005)
$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho v) = 0,$$
$$\frac{\partial v}{\partial t} + (v \cdot \nabla)v = -\frac{\beta}{2\rho}\nabla p - \frac{\hat{e}_r}{F_r r^2} + \frac{J \times B}{\rho} + F_v,$$
$$\frac{\partial B}{\partial t} = \nabla \times (v \times B) + S^{-1}\nabla^2 B,$$
$$\left(\frac{\partial}{\partial t} + v \cdot \nabla\right)\frac{p}{\rho^\gamma} = (\gamma - 1)(S_h + S_l),$$
where ρ is the fluid density, v is the fluid velocity, B is the magnetic field, \(J = \nabla \times B\) is the current density, and p is the plasma pressure. The magnetic field is normalized by the typical magnetic field B0 at the base of the corona, distances (r) are normalized by the solar radius R⊙, the density is normalized by the typical density ρ0 at the coronal base, the velocity is normalized by the Alfvén speed \(V_A = B_0/(4\pi\rho_0)^{1/2}\), and time is normalized by the Alfvén time \(\tau_A = L/V_A\), where L is the typical lengthscale of the problem (for convenience, L = R⊙ is used). The pressure is normalized by \(p_0 = k_B n_0 T_0\), where k_B is the Boltzmann constant, \(n_0 = \rho_0/(\mu m_p)\) is the typical number density, m_p is the proton mass, μ is the average mass number of the coronal plasma ions, and T0 is the typical temperature at the base of the corona. The Froude number is \(F_r = V_A^2 R_\odot/(GM_\odot)\), G is the gravitational constant, R⊙ is the solar radius, M⊙ is the solar mass, and \(\beta = 2c_s^2/(\gamma V_A^2)\) is the ratio of thermal to magnetic pressure, where \(c_s = (\gamma p/\rho)^{1/2}\) is the sound speed. The Lundquist number \(S = \tau_r/\tau_A\) is the ratio of the resistive diffusion time (\(\tau_r = 4\pi L^2/(\nu c^2)\), where ν is the resistivity and c is the speed of light) to the typical Alfvén time, and F_v is the viscous force term (see Braginskii, 1965). The heating and loss terms are S_h and S_l, respectively, where S_h is the coronal heating function (assumed, or obtained empirically), and S_l represents the losses due to thermal conduction and radiation (for example, Landi and Landini, 1999; Colgan et al., 2008, for optically thin plasma). The polytropic index is γ = 1 for isothermal plasma, γ = 1.05 in a commonly used polytropic model of the solar wind, and γ = 5/3 for solar wind models with explicit heating terms. A variable γ was used by Cohen et al. (2007) to match solar wind properties at 1 AU. The above set of equations is supplemented by the equation of state p = n k_B T, and by the solenoidality condition \(\nabla \cdot B = 0\). The visco-resistive single fluid MHD model was used recently to reproduce the global emission properties of the solar corona (Lionello et al., 2009; Downs et al., 2010).
2.2 Multi-fluid models
Here, we describe the multi-fluid equations and the model utilized by Ofman (2004a) to model the fast solar wind in coronal holes accelerated by nonlinear MHD waves.
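Both the single-fluid equations above and the multi-fluid equations below use the same characteristic normalization scales. As an illustration, the short Python sketch below evaluates V_A, τ_A, β, and the Froude number for representative coronal-base values; the chosen B0, n0, T0, and μ are illustrative assumptions, not parameters of a specific model reviewed here.

```python
import math

# Illustrative coronal-base parameters (assumed, not from a specific model)
B0 = 5.0           # G, coronal-hole magnetic field
n0 = 1.0e8         # cm^-3, base number density
T0 = 1.0e6         # K, base temperature
mu = 1.2           # average mass number (assumed)

m_p = 1.6726e-24   # g
k_B = 1.3807e-16   # erg/K
G   = 6.674e-8     # cm^3 g^-1 s^-2
M_sun = 1.989e33   # g
R_sun = 6.957e10   # cm
gamma = 5.0 / 3.0

rho0 = mu * m_p * n0                                # mass density
V_A  = B0 / math.sqrt(4.0 * math.pi * rho0)         # Alfven speed
tau_A = R_sun / V_A                                 # Alfven time (L = R_sun)
c_s  = math.sqrt(gamma * k_B * T0 / (mu * m_p))     # sound speed
beta = 2.0 * c_s**2 / (gamma * V_A**2)              # thermal/magnetic pressure ratio
F_r  = V_A**2 * R_sun / (G * M_sun)                 # Froude number

print(f"V_A = {V_A/1e5:.0f} km/s, tau_A = {tau_A/3600:.2f} h, "
      f"beta = {beta:.3f}, F_r = {F_r:.1f}")
```

For these values the Alfvén speed is of order 1000 km s−1 and β is of order 0.01, i.e., a strongly magnetized low-β corona, which is the regime assumed in the models discussed below.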
Neglecting electron inertia (m_e ≪ m_p) and relativistic effects (V ≪ c), and assuming quasi-neutrality \((n_e = n_p + Zn_i)\), where Z is the charge number, the normalized three-fluid MHD equations can be written as
$$\frac{\partial n_k}{\partial t} + \nabla \cdot (n_k V_k) = 0,$$
$$n_k\left[\frac{\partial V_k}{\partial t} + (V_k \cdot \nabla)V_k\right] = -E_{uk}\nabla p_k - E_{ue}\frac{Z_k n_k}{A_k n_e}\nabla p_e - \frac{n_k}{F_r r^2}e_r + \Omega_k n_k (V_k - V_e)\times B + F_v + n_k F_{k,coul},$$
$$\frac{\partial B}{\partial t} = \nabla \times (V_e \times B) - \frac{1}{S}\nabla \times \nabla \times B,$$
$$\frac{\partial T_k}{\partial t} = -(\gamma_k - 1)T_k\nabla\cdot V_k - V_k\cdot\nabla T_k + C_{kjl} + (\gamma_k - 1)(H_k/n_k + S_k),$$
where the index k = p, i (in Equation (9) k = e, p, i), H_k is the heat conduction term, S_k is the heating term of each fluid, C_kjl is the energy coupling term between the various fluids (Li et al., 1997; Ofman, 2004a), \(F_v = \nabla\cdot\Pi\) is the viscous force term due to ions, where Π is the viscous stress tensor, the Coulomb friction terms F_{k,coul} are given by Braginskii (1965), γ_k = 5/3 is the polytropic index of each species, A_k is the mass number, and \(\Omega_k = \frac{Z_k e B_0}{A_k m_p c}\tau_A\) is the normalized gyrofrequency. The three-fluid equations are normalized by r→r/R⊙, where R⊙ is the solar radius; t→t/τ_A; V→V/V_A; B→B/B0; n_k→n_k/n_{e0}; T_k→T_k/T_{k0}. The following parameters enter the above equations: S, the Lundquist number; \(E_{uk} = (k_B T_{k,0}/m_k)/V_A^2\), the Euler number of species k (with the electron Euler number E_{ue} defined analogously); F_r, the Froude number; A_k, the atomic mass number of species k; \(b = cB_0/(4\pi e n_{e0} R_\odot V_A)\); T_{k,0} is the normalization temperature, m_k is the mass of the particles, and B0 is the normalization magnetic field. The heat conduction term in Equation (9) is normalized by \(H_k \to H_k (k_B V_A R_\odot/T_{k0}^{2.5})\). In the Ofman (2004a) model the fast solar wind is produced by a broad band spectrum of waves. The linearly polarized Alfvén waves are driven at the base of the corona as follows:
$$B_\varphi(t,\theta,r=1) = -V_d + V_{A,r} F(t,\theta),$$
$$F(t,\theta) = \sum_{i=1}^{N} a_i \sin(\omega_i t + \Gamma_i(\theta)),$$
where \(a_i = i^{p/2}\), with p = −1 for the f−1 spectrum, the discrete frequencies are given by \(\omega_i = \omega_1 + (i-1)\Delta\omega\), and the frequency range is defined by \(\Delta\omega = (\omega_N - \omega_1)/(N-1)\), where N is the number of modes and Γ_i(θ) is a random phase that depends on the solar latitude θ. Typically, the frequencies are in the mHz range, the driving amplitude V_d is a few percent of V_A, and on the order of 100 modes are used to model the desired spectrum (see Figure 4). As described by Ofman (2004a), dissipation of the waves occurs through the viscous and resistive terms in the momentum and induction equations, respectively. The dissipation coefficients used in that model are a hyper-viscosity and hyper-resistivity, i.e., their values are much larger than the classical resistivity and viscosity, accounting empirically for kinetic and turbulent effects.
Figure 4: The typical form of the driving spectrum of Alfvén waves used in the 3-fluid model to drive the solar wind.
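A minimal sketch of how a discrete driver of the form of Equations (10)-(11) can be synthesized is given below; the values of N, the frequency range, and the driving amplitude are illustrative assumptions rather than the parameters used by Ofman (2004a).

```python
import numpy as np

def alfven_driver(t, N=100, f1=0.1e-3, fN=10e-3, p=-1.0, seed=0):
    """Broadband driver F(t) = sum_i a_i sin(w_i t + Gamma_i) with a_i = i**(p/2),
    which gives an approximately f**p power spectrum (here p = -1).
    Frequencies are spaced linearly between f1 and fN (Hz)."""
    rng = np.random.default_rng(seed)
    i = np.arange(1, N + 1)
    omega = 2.0 * np.pi * np.linspace(f1, fN, N)   # w_i = w_1 + (i-1)*dw
    a = i ** (p / 2.0)                             # amplitudes for the f**p spectrum
    gamma = rng.uniform(0.0, 2.0 * np.pi, N)       # random phases Gamma_i
    return np.sum(a[:, None] * np.sin(omega[:, None] * t[None, :] + gamma[:, None]),
                  axis=0)

# Example: several hours of driving, sampled every 10 s
t = np.arange(0.0, 6 * 3600.0, 10.0)
Vd = 0.03                                          # a few percent of V_A (assumed)
F_raw = alfven_driver(t)
F = Vd * F_raw / np.max(np.abs(F_raw))             # normalized driver time series
```

A Fourier transform of such a time series reproduces the f−1 shape of the driving spectrum sketched in Figure 4.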
In addition to the waves, an empirical heating function (Equation (12)) is introduced in this model to heat the ions, where S_{0,k} ≡ s_{0,k} n_k is the amplitude of the heat input in normalized units, and λ_k is the length scale of the heating in R⊙. This is necessary since the Alfvén wave spectrum constrained by the available observations cannot account for the observed (e.g., Kohl et al., 1997) preferential acceleration and heating of heavy ions. The asymptotic solar wind parameters (speed of the various ion species, mass flux, temperature, etc.) can be matched by fitting the parameters of the heating function, combined with the parameters of the driving Alfvén wave spectrum.
2.3 Hybrid models
In the hybrid model the ions are represented as particles, neglecting collisions, while the electrons are described as a finite temperature massless fluid in order to maintain quasi-neutrality of the plasma. This method allows one to resolve the ion dynamics and to integrate the equations over many ion-cyclotron periods, while neglecting the small temporal and spatial scales of the electron kinetic motions. In the hybrid model, the three velocity components of, typically, on the order of a million particles are used to calculate the currents and the fields on the 1D, 2D, or, in some cases, 3D grid. Note that each numerical particle represents a large number of real particles, determined by the density normalization. The required number of particles per cell is determined by the acceptable level of overall statistical noise, and can be increased by an order of magnitude as needed. The following equations of motion are solved for each particle of species k:
$$\frac{dx_k}{dt} = v_k,$$
$$m_k\frac{dv_k}{dt} = Z_k e\left(E + \frac{v_k \times B}{c}\right),$$
where m_k is the particle mass, Z_k is the charge number, e is the electron charge, and c is the speed of light. The electron momentum equation is solved by neglecting the electron inertia:
$$\frac{\partial}{\partial t} n_e m_e v_e = 0 = -e n_e\left(E + \frac{v_e \times B}{c}\right) - \nabla p_e,$$
where p_e = k_B n_e T_e is used for closure, and quasi-neutrality implies \(n_e = n_p + Z n_i\), where n_e, n_p, and n_i are the number densities of the electrons, protons, and ions, respectively. The above equations are supplemented with Maxwell's equations
$$\nabla \times B = \frac{4\pi}{c}J,$$
where the displacement current is neglected in the non-relativistic plasma, and
$$\nabla \times E = -\frac{1}{c}\frac{\partial B}{\partial t}.$$
The field solutions are obtained on the 1D, 2D, or 3D grid, and the proton and ion equations of motion are solved as the particle motions respond to the fields at each time step. The method has been tested and used successfully in many studies. In the Ofman and Viñas (2007) 2D study a 128 × 128 grid with 100 particles per cell per species was used. The particle and field equations were integrated in time using a rational Runge-Kutta (RRK) method (Wambecq, 1978), whereas the spatial derivatives were calculated by a pseudospectral FFT method. When non-periodic boundary conditions are applied, a finite difference method is used for the field solver. The hybrid model allows computing the self-consistent evolution of the velocity distribution of the ions, including the nonlinear effects of wave-particle interactions, without additional assumptions. Moreover, the hybrid model is well suited to describe the nonlinear saturated state of the plasma.
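To make the particle equations of motion above concrete, the sketch below advances a single ion with the standard Boris scheme, a common choice in hybrid and particle-in-cell codes. This is a generic illustration under assumed Gaussian-unit fields, not the rational Runge-Kutta integrator used in the studies reviewed here.

```python
import numpy as np

def boris_push(x, v, q, m, E, B, dt, c=2.99792458e10):
    """Advance one particle by dt with the Boris scheme (Gaussian units):
    m dv/dt = q (E + v x B / c),  dx/dt = v."""
    qm = q * dt / (2.0 * m)
    v_minus = v + qm * E                      # first half electric acceleration
    t = qm * B / c                            # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)  # rotation about B
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + qm * E                   # second half electric acceleration
    return x + v_new * dt, v_new

# Example: a proton gyrating in a uniform B = 10 G field with E = 0 (assumed values)
q, m, c = 4.8032e-10, 1.6726e-24, 2.99792458e10   # statC, g, cm/s
B = np.array([0.0, 0.0, 10.0])                    # G
E = np.zeros(3)
x, v = np.zeros(3), np.array([1.0e7, 0.0, 0.0])   # 100 km/s perpendicular speed
omega_c = q * np.linalg.norm(B) / (m * c)         # proton gyrofrequency
dt = 0.05 / omega_c                               # resolve the gyro-period
for _ in range(1000):
    x, v = boris_push(x, v, q, m, E, B, dt)
```

The Boris rotation preserves the particle speed in a pure magnetic field, which is why it is widely used for long integrations over many ion-cyclotron periods.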
Since the hybrid models usually describe a region of several hundred ion inertial lengths (l_i = c/ω_pi) across, the method is limited to modeling local small scale structures in the corona. For a typical solar wind plasma density of 10^4 cm−3 at 10 R⊙ and a simulation box with a side of 440 l_i, we get about 1000 km for the extent of the simulated region in each dimension. At 1 AU the plasma density is much lower and the modeled region covers about 45000 km in each dimension. A way to overcome the computational limitation to small scales is to use an 'expanding box' model (e.g., Grappin and Velli, 1996; Liewer et al., 2001; Hellinger et al., 2005). This approach employs a transformation of variables to the moving solar wind frame that expands together with the size of the parcel of plasma as it propagates outward from the Sun. In particular, it is assumed that a small packet of plasma of length δr ≪ R0 and width a(t), where a(t)/R0 ≪ 1, expands in the lateral direction only as it moves away from the Sun at constant speed U0. The initial distance R0 is O(R⊙). Thus, the modeled region's position is \(R(t) = R_\odot + U_0 t\), and the normalized width is a(t) = R(t)/R0. Using these transformations the coordinates are transformed as \(x' = x - R(t)\), \(y' = y/a(t)\), and \(z' = z/a(t)\), and the equations of motion together with the field equations are transformed to the moving and expanding frame. Although the method requires several severe simplifying assumptions (i.e., lateral expansion only, constant solar wind speed) and approximations (the original spherical coordinates and the mean magnetic field are transformed to the new coordinates using a second order expansion; see Liewer et al., 2001) to remain tractable, it provides a qualitatively good description of the solar wind expansion, thus connecting the disparate scales of the plasma in the various parts of the heliosphere.
3 Selected Model Results
In this section a brief overview of solar wind model results is given. Here, we concentrate on the results of wave driven 2.5D MHD and 2.5D multi-fluid models, as well as 1D and 2D hybrid models. In the reviewed models the waves are included explicitly and fully resolved, and their damping or resonant absorption is calculated explicitly. This allows a more accurate description of the physics and of the interaction between the waves and the solar wind plasma than in WKB models, or in models that parameterize the propagation and dissipation of the waves. The results of a global 3D MHD solar wind model computed with the SWMF that incorporates the effects of Alfvén wave heating and acceleration in the WKB approximation are shown in Figure 5. The figure shows a cut of the 3D MHD model results in the meridional plane for an idealized tilted-dipole magnetic field configuration. The formation of the bi-modal solar wind, with slow wind in the streamer belt and fast wind at higher latitudes, is evident in the radial outflow velocity values.
Figure 5: The radial velocity in the meridional plane for a two-temperature, idealized tilted-dipole simulation with a 15° tilt with respect to the rotation axis. The black curves indicate the location of the Alfvénic surface. The red regions show the location of the fast solar wind, and the blue-green regions show sources of the slow solar wind. Image reproduced by permission from Oran et al. (2013); copyright by AAS.
Wave models in the single fluid MHD are limited to frequencies much smaller than the proton gyrofrequency and correspondingly to wavelengths that are much larger than the proton gyroradius.
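The scale separation just mentioned, and the hybrid box sizes quoted earlier in this section, follow directly from the definitions of the proton gyrofrequency, gyroradius, and ion inertial length. The sketch below evaluates them; the field strengths, densities, and temperatures are assumed representative values, not parameters from a specific simulation.

```python
import math

e, m_p, c, k_B = 4.8032e-10, 1.6726e-24, 2.99792458e10, 1.3807e-16  # cgs constants

def proton_scales(B_G, n_cm3, T_K):
    """Proton gyrofrequency (rad/s), thermal gyroradius (cm), inertial length (cm)."""
    Omega_p = e * B_G / (m_p * c)
    r_g = math.sqrt(k_B * T_K / m_p) / Omega_p
    l_i = c / math.sqrt(4.0 * math.pi * n_cm3 * e**2 / m_p)
    return Omega_p, r_g, l_i

# Assumed illustrative values near 10 R_sun and at 1 AU
for label, B, n, T in [("10 R_sun", 0.1, 1.0e4, 1.0e6), ("1 AU", 5.0e-5, 5.0, 1.0e5)]:
    Omega_p, r_g, l_i = proton_scales(B, n, T)
    print(f"{label}: f_cp = {Omega_p/(2*math.pi):.2f} Hz, r_g = {r_g/1e5:.2f} km, "
          f"440 l_i = {440*l_i/1e5:.0f} km")
```

For these inputs the 440 l_i box reproduces the roughly 1000 km and 45000 km extents quoted above, while the proton gyroradius stays far below the wavelengths resolved by single-fluid MHD wave models.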
The acceleration of the solar wind plasma by the waves due to momentum transfer (wave reflection and the gradient of the wave pressure) is modeled. The dissipation of the waves by Ohmic and viscous dissipation terms is included. The multi-fluid models allow including waves with frequencies in the MHD range and in the range of the proton and ion gyroresonant frequencies. The multi-ion cyclotron resonant dispersion of the waves is reproduced by this model even in the linear regime. It has been shown that the multi-fluid dispersion relation is equivalent to the Vlasov dispersion relation for cold plasma (for example, see Ofman et al., 2005). In addition, the multi-fluid models can include separate heating and dissipation processes for electrons, protons, and each ion species with different dissipation coefficients. The fluids are coupled through collisional energy exchange terms and Coulomb friction, and through electromagnetic interactions. The multi-fluid models provide the next level of plasma approximation between the MHD and the kinetic descriptions. The hybrid simulations extend the modeled physics of the solar wind plasma to even smaller scales in time and space, into the kinetic regime, and the wave frequencies in the ion- and proton-gyroresonant range are resolved. In addition, ion velocity space instabilities and ion kinetic processes are modeled fully. The hybrid models are limited to waves with frequencies below the electron gyroresonant frequency, since the electrons are treated as a fluid in these models. Other limitations of these models are outlined in Winske and Omidi (1993). The 1D hybrid models are limited to parallel propagating waves in one spatial direction, or to oblique waves with a fixed angle of propagation. The more general 2D hybrid models include the description of waves with arbitrary propagation direction and can be used to model inhomogeneous plasma in two spatial dimensions.
3.1 Fast solar wind in coronal holes
It is well known that the thermally driven Parker solar wind model with a typical coronal temperature of 1–2 MK can produce the slow solar wind asymptotic speed of about 400 km s−1, but cannot explain the fast solar wind that is observed to reach 800 km s−1 within 10 R⊙ and is associated with coronal holes with typical temperatures < 1 MK (Aschwanden, 2004). The common approach is to include an additional source of momentum in the MHD equations, such as Alfvén waves in the form of an empirical WKB momentum addition term (e.g., Usmanov et al., 2000). This approach was recently extended to include the effects of turbulence dissipation in a global wave driven solar wind model, implemented in the Space Weather Modeling Framework (SWMF) (Tóth et al., 2005) coronal 3D MHD code (Evans et al., 2012; Sokolov et al., 2013; Oran et al., 2013). Two-temperature Alfvén wave driven fast solar wind models were also developed in the SWMF (van der Holst et al., 2010). Lau and Siregar (1996) studied the acceleration of the solar wind by resolved nonlinear Alfvén waves in a 1.5D MHD model. Ofman and Davila (1997) were the first to use a single fluid 2.5D MHD model to study a resolved Alfvén wave driven fast solar wind in a coronal hole. In their model the Alfvén waves were launched at the solar boundary of a coronal hole, and were resolved throughout the coronal hole out to 40 R⊙. The acceleration of the solar wind occurs through momentum transfer from the waves to the solar wind plasma. The heating of the solar wind plasma was not included explicitly in this model and an isothermal approximation was used (γ = 1).
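For reference, the isothermal Parker wind mentioned above (and shown as the green curve in Figure 7) is the transonic solution of the standard wind equation; the following is a textbook summary, not a result of the reviewed models:
$$\frac{1}{v}\frac{dv}{dr}\left(v^2 - c_s^2\right) = \frac{2c_s^2}{r} - \frac{GM_\odot}{r^2}, \qquad r_c = \frac{GM_\odot}{2c_s^2},$$
$$\left(\frac{v}{c_s}\right)^2 - \ln\left(\frac{v}{c_s}\right)^2 = 4\ln\frac{r}{r_c} + \frac{4r_c}{r} - 3,$$
where r_c is the critical (sonic) radius through which the accelerating solution passes. For coronal temperatures of 1–2 MK this solution yields asymptotic speeds of the order of the slow wind value quoted above, which is why an additional wave momentum source is required to reproduce the fast wind.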
However, wave dissipation does occur through resistive dissipation with a finite value of S. In Figure 6, a snapshot of the spatial dependence of the solutions in terms of B_φ/ρ^{1/2}, v_φ, v_r, and ρ is shown at t = 255τ_A = 32.5 h. The velocities and B_φ/ρ^{1/2} are in units of V_A = 1527 km s−1, and the density is in units of 10^8 cm−3. The monochromatic Alfvén waves launched in this model are evident in v_φ and in B_φ/ρ^{1/2}. The nonlinear longitudinal waves, produced by the gradient of the compressions associated with the Alfvén wave magnetic pressure, B_φ^2, are evident in v_r and ρ. The large amplitude, long wavelength compressional velocity and density fluctuations propagate in phase (Ofman and Davila, 1998). Ofman and Davila (1998) found that low-frequency (0.35 mHz) Alfvén waves with an amplitude of 46 km s−1 can produce the fast solar wind in coronal holes.
Figure 6: The result of a 2.5D MHD Alfvén wave driven fast solar wind model in a coronal hole. A snapshot of the spatial dependence of B_φ/ρ^{1/2}, v_φ, v_r, and ρ is shown at t = 255τ_A = 32.5 h. The velocities and B_φ/ρ^{1/2} are in units of V_A = 1527 km s−1, and the density is in units of 10^8 cm−3. The Alfvén waves are evident in v_φ and in B_φ/ρ^{1/2}. The nonlinear longitudinal waves are evident in v_r and ρ and propagate in phase (Ofman and Davila, 1998).
Grappin et al. (2002) were the first to study a resolved Alfvén wave driven wind that includes both closed and open field regions using a 2.5D MHD model. They found that the onset of Alfvén wave flux in one hemisphere generates a stable global circulation pattern in the closed loop region that can lead to a global north-south asymmetry of the solar corona. In Figure 7 a cut through the center of the coronal hole is shown. The v_r and v_φ solar wind velocity components are shown for two Alfvén wave driving frequencies, and the green curve shows Parker's isothermal solar wind solution. It is evident that the low frequency waves (f = 0.35 mHz) lead to significant acceleration of the fast solar wind above Parker's solution, and produce the fast solar wind far from the Sun. The higher frequency waves provide acceleration close to the Sun, below 10 R⊙.
Figure 7: Alfvén wave driven fast solar wind obtained with the 2.5D MHD model (a cut through the center of the coronal hole is shown). V_r and V_φ solar wind velocities are shown for two Alfvén wave driving frequencies. The green curve shows Parker's isothermal solar wind solution (adapted from Ofman and Davila, 1998).
The 2.5D model discussed above includes only the coronal part of the solar wind, with the driving Alfvén waves applied at the lower coronal boundary. Recently, Suzuki and Inutsuka (2005) modeled the acceleration of the fast solar wind by Alfvén waves from the photosphere to 0.3 AU using a 1.5D model (Figure 8). This approach allowed connecting directly the photospheric motions of observationally constrained magnitude to the solar wind speed at 0.3 AU. Although the model does not include the effects of cross-field gradients, it demonstrates that sufficient Alfvén wave energy flux reaches the corona to accelerate the solar wind. The 1.5D model results compare favorably to IPS observations (Grall et al., 1996; Canals et al., 2002) and to SOHO observations (see Suzuki and Inutsuka, 2005, for the details).
Figure 8: Results from the 1.5D solar wind model (red lines) compared to observations (symbols and symbols with error bars). See Suzuki and Inutsuka (2005) for the details (reproduced by permission of the AAS).
3.2 Fast solar wind: 2.5D multi-fluid models
In Figures 9 and 10 we show the results of the 3-fluid model of the Alfvén wave driven fast solar wind in a coronal hole obtained by Ofman (2004a). In this model a broad band spectrum of Alfvénic fluctuations was applied at the lower coronal hole boundary. The fast solar wind was produced by acceleration and heating with the spectrum of Alfvén waves, which were fully resolved in the model. In Figure 9 the Alfvén waves are evident in the V_φ velocity component, and the accelerating solar wind is evident in the V_r velocity component, for He++ ions (left panels) and protons (right panels) at t = 114τ_A. Note that compressive fluctuations are also seen in V_r due to the local variation of the wave pressure gradient. The velocity is in units of 1527 km s−1. The distance R is in units of R⊙, and the latitude θ is in radians.
Figure 9: Results of the 3-fluid model of the Alfvén wave driven fast solar wind in a coronal hole. The V_φ and V_r velocity components for He++ (left panels) and protons (right panels). The velocity is in units of 1527 km s−1. The distance R is in units of R⊙, and the latitude θ is in radians (Ofman, 2004a).
Figure 10: The typical form of the magnetic fluctuation spectrum obtained with the 3-fluid model at 18 R⊙. The solid line shows a fit with ω−2, while the dashed curve shows a fit with ω−5/3 (adapted from Ofman, 2004a).
The typical form of the magnetic fluctuations obtained with the 3-fluid model at r = 18 R⊙ is shown in Figure 10. It is interesting to note that the f−1 spectrum launched at the base of the coronal hole results in an f−2 spectrum at larger distances. The steepening of the magnetic fluctuation spectrum is expected due to turbulence and dissipation that affect shorter wavelengths (and correspondingly higher frequencies) more than the long wavelength (low frequency) fluctuations. The power law dependence is close to Kolmogorov's turbulent power spectrum of f−5/3. At frequencies higher than 200τ_A^−1 the spectrum steepens due to increased dissipation. In Figure 11 the θ-averaged outflow speeds of the proton, He++, and O5+ ion fluids are shown for four sets of model parameters. The parameters include (a) H0p = 0.5, H0He++ = 12, V_d = 0.034, (b) H0p = 0.5, H0O5+ = 10, V_d = 0.034, and (d) H0p = 0.0, H0He++ = 12, V_d = 0.05, where H0 is the heating rate per particle for the empirical heating term used in Ofman (2004a), and V_d is the amplitude of the Alfvén wave spectrum. The corresponding temperatures and densities are shown in Figure 12. In Figure 11a the solutions of the 3-fluid model with the empirical heating term [Equation (12)] in addition to the Alfvén wave spectrum are shown. The heating term parameters were chosen to match the observed fast solar wind speed, and the faster outflow of He++ ions compared to protons observed at 0.3 AU and beyond with the Helios and Ulysses spacecraft (Marsch et al., 1982a,b; Feldman et al., 1996; Neugebauer et al., 2001). In Figure 11b the O5+ ions were included as the third fluid, and the heating function parameters for the O5+ ions were adjusted to obtain a faster than proton outflow. In Figure 11c the same heating per particle was deposited in the protons and He++ ions. Evidently, in this case the He++ ion outflow speed is slower than the proton outflow speed, contrary to observations. In Figure 11d the solar wind protons are accelerated and heated solely by the Alfvén wave spectrum (i.e., H0p = 0). This was achieved by increasing the input wave amplitude, compared to the values used in Figures 11a–c.
Note that the temperature structure of the protons and O5+ ions seen in Figure 12b is in qualitative agreement with SOHO/UVCS observations (Kohl et al., 1997; Cranmer et al., 1999; Antonucci et al., 2000) (at present there are no observations of the He++ temperature in this region). The model shows that in all cases electron heating can be achieved by thermal coupling between electrons and protons alone (through the C_kjl thermal coupling term in Equation (9)).
Figure 11: Results of the 3-fluid model: the outflow speed of protons (solid) and ions (dashed) in the coronal hole, averaged over θ, for the fast solar wind in a coronal hole. (a) With preferential heating of He++ ions. (b) Same as (a), but with preferential heating of O5+ as the heavy ions. (c) Solar wind produced with equal heat input per particle for protons and He++ ions. (d) Wave driven wind, with no empirical heating of protons and electrons (adapted from Ofman, 2004a).
Figure 12: The temperatures and densities of the electrons, protons, and ions obtained with the 3-fluid model of the fast solar wind for the cases shown in Figure 11 (adapted from Ofman, 2004a).
In spectroscopic observations of emission lines the observed finite line width is the result of broadening by the Doppler shift due to the motion of the emitting ions along the line of sight. The motions are usually attributed to two components: (1) thermal or kinetic motions due to the finite width of the ion velocity distribution; (2) non-thermal motions, arising from any unresolved macroscopic motions of the plasma in the line of sight. The effect of observationally unresolved Alfvén waves on the apparent emission line widths was modeled with the 3-fluid model by Ofman and Davila (2001) and Ofman (2004a). These models allow separating the contribution of the waves to the observed line profiles. In Figure 13 the Doppler-broadened emission line resulting from the combined thermal and non-thermal motions calculated with the 3-fluid model is shown. The solid line shows the simulated emission line profile of protons at 4 R⊙ at a temperature of 3.5 MK, and the dashes show the emission line broadened by the unresolved Alfvénic fluctuations. In Figure 14 the effective temperature calculated from the kinetic temperature and the Alfvénic wave motion contribution is shown for the wave-driven fast solar wind as a function of heliocentric distance. The effective proton temperature is affected significantly by the non-thermal component, while the relative effect on He++ is smaller close to the Sun. By comparing the results of the 3-fluid model to observations, it is possible to better evaluate the thermal and non-thermal motions of protons and heavier ions in the observational data.
Figure 13: Doppler broadening of an emission line as a result of unresolved Alfvén wave motions in the line of sight obtained with the 3-fluid model. Thermal (solid line) and simulated (dashed line) line profiles at 4 R⊙. The integration time is 1.7 h (adapted from Ofman and Davila, 2001).
Figure 14: The effective temperature and the kinetic temperature for protons (solid) and ions (dashes) for the wave driven fast solar wind. The effective temperature, which includes the contribution of unresolved Alfvénic fluctuations, is shown by the thick line style, while the kinetic temperature is shown by the thin line style (adapted from Ofman, 2004a).
3.3 1D hybrid models
Recently, Ofman et al. (2002) used a 1D hybrid model of an initially homogeneous, collisionless plasma to study the heating of solar wind plasma by a spectrum of ion-cyclotron waves.
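The preferential ion heating discussed in the following results relies on the ion-cyclotron resonance condition; the relation below is the standard form (consistent with the gyrofrequency defined in Section 2.2) and is quoted here for orientation only:
$$\omega - k_\parallel v_\parallel = \Omega_k, \qquad \Omega_k = \frac{Z_k e B}{A_k m_p c},$$
so species with a smaller charge-to-mass ratio Z_k/A_k (such as O5+) have lower gyrofrequencies and resonate with lower frequency waves, which in an f−1 or f−5/3 spectrum carry more power than the waves resonant with the protons.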
Motivated by observations, the model was driven by circularly polarized Alfvénic fluctuations with spectra of the form f−1 and f−5/3 over a limited bandwidth. They found that the ion heating depends on the resonant power in the frequency range of the input spectrum. Preferential heating of minor ions, such as O5+, over protons was demonstrated in this model. In Figure 15 the evolution of the temperature anisotropy for protons and O5+ ions is shown. It is evident that after ∼ 600 Ω_p^−1 the perpendicular heating of the ions saturates at an anisotropy level of ∼ 7, and the protons are not heated significantly. The level of saturated anisotropy is determined by the temperature dependent nonlinear balance between the ion-cyclotron unstable ion velocity distribution, which releases electromagnetic ion-cyclotron waves, and the resonant absorption of magnetic fluctuations together with parallel heating of the ions. Inspecting the perpendicular and parallel temperatures of O5+ at the end of the run shows that the heating was predominantly in the perpendicular direction (Ofman et al., 2002).
Figure 15: The temporal evolution of the O5+ ion (top panel) and proton (lower panel) temperature anisotropy obtained with the 1D hybrid model for the driven wave spectrum case (adapted from Ofman et al., 2002).
In Figure 16 the velocity distributions of the protons and O5+ ions are shown at the end of the run. It is evident that the proton velocity distribution is isotropic, while the O5+ ions are hotter in the perpendicular direction than in the parallel direction. The O5+ velocity distribution is close to bi-Maxwellian, with small non-Maxwellian features in the parallel velocity distribution, likely produced by the small parallel heating due to nonlinear compressive modes driven by the Alfvénic fluctuation spectrum.
Figure 16: The velocity distribution of O5+ ions (left panel) and protons (right panel) obtained with the 1D hybrid model of the driven wave spectrum. V_x is parallel to the background magnetic field and is shown with the solid curve; the transverse components V_y (dashes) and V_z (dots) are also shown (adapted from Ofman et al., 2002).
The relaxation of the O5+ ion temperature anisotropy due to the ion-cyclotron instability, for the parameter range relevant to the fast solar wind in coronal holes, was studied using a 1D hybrid model (Ofman et al., 2001) (see Figure 23). The study was motivated by SOHO/UVCS observations indicating a large temperature anisotropy of O5+ ions (Kohl et al., 1997; Cranmer et al., 1999). It was found that the scaling of the relaxed \(T_{\perp i}/T_{\parallel i} - 1\) with the final β∥i (full circles) and the scaling of the relaxation time, t_rel, with the initial β∥i (circles) agree well with the theoretical scaling law β∥i^−0.41 (Gary, 1993). The "x"'s mark the values \(T_{\perp i}/T_{\parallel i} - 1\) at t = 0. The enhanced O5+ abundance relative to protons of 6 × 10−4 in this model was implemented in order to shorten the computation times. A similar result was found in a 2D hybrid model by Gary et al. (2003) (see Section 3.4 and Figure 22 below).
Figure 23: The results of the parametric study with the 1D hybrid simulation of the O5+ temperature anisotropy relaxation by Ofman et al. (2001). The scaling of the relaxed \(T_{\perp i}/T_{\parallel i} - 1\) with the final β∥i (full circles), and the scaling of the relaxation time, t_rel, with the initial β∥i (circles). Both quantities scale as β∥i^−0.41. The "x"'s mark the values \(T_{\perp i}/T_{\parallel i} - 1\) at t = 0. The enhanced O5+ abundance of 6 × 10−4 in this parametric study leads to shorter computation times (Ofman et al., 2001).
Figure 22: Results of the parametric study of the He++ anisotropy relaxation obtained with the 2D hybrid code by Gary et al. (2003). The parameters were nα/ne = 0.05 with initial \(T_e/T_{\parallel p} = 1.0\), \(T_{\perp\alpha}/T_{\parallel p} = 4.0\), and isotropic protons. The crosses correspond to t = 0, the squares indicate plasma parameters at saturation of the fluctuating magnetic fields, and the dots represent later times. The dashed line indicates the best fit of the anisotropies at Ω_p t = 400 (adapted from Gary et al., 2003).
Recently, Ofman et al. (2005) investigated the effects of high-frequency (of order the ion gyrofrequency) Alfvén and ion-cyclotron waves on ion emission lines by studying the dispersion of these waves in a multi-ion coronal plasma. The dispersion relation of parallel propagating Alfvén-cyclotron waves in the multi-ion coronal plasma was determined using a 1D hybrid model (see Figure 17) and compared with the multi-fluid and Vlasov dispersion relations. It was found that the three methods are in good qualitative agreement in the weakly damped regime (kC_A/Ω_p < 1). The ratio of the ion to proton fluid velocities perpendicular to the direction of the magnetic field was calculated for each wave mode for typical coronal parameters (see Figure 18). It was found that the O6+ perpendicular fluid velocity exhibits a strong (factor of 20–100) enhancement, and the He++ perpendicular velocity is enhanced by a factor of 3.5–5, compared with the proton perpendicular fluid velocity, in qualitative agreement with SOHO/UVCS observations of the large perpendicular velocities of heavy ions in coronal holes (e.g., Kohl et al., 1997; Cranmer et al., 1999). The study demonstrated how the results of hybrid models can be used to better understand the observations of coronal ion emission.
Figure 17: The dispersion relations obtained from the 1D hybrid model in a three-ion plasma (p, He++, O6+). The intensity scale shows the power of the Fourier transform of (a) the transverse magnetic field fluctuations, and the transverse fluid velocities of (b) protons, (c) He++, and (d) O6+ (Ofman et al., 2005).
Figure 18: Velocity amplitude ratios \(V_{He^{++}}/V_p\) (top panel) and \(V_{O^{6+}}/V_p\) (bottom panel) obtained from the 1D hybrid simulation dispersion relation. The ratio \(V_{He^{++}}/V_p\) is shown in the top panel for \(kC_A/\Omega_p \approx 0\) (solid line), and for \(kC_A/\Omega_p \approx 0.52\) (dashes). Bottom panel: same as the top panel, but for the ratio \(V_{O^{6+}}/V_p\) (Ofman et al., 2005).
Recently, Araneda et al. (2007, 2008) used Vlasov theory and one-dimensional hybrid simulations to study the effects of compressible fluctuations driven by parametric instabilities of Alfvén-cyclotron waves. They found that field-aligned proton beams are generated during the saturation phase of the wave-particle interaction, with a drift speed somewhat above the Alfvén speed. This finding agrees with the typically observed velocity distributions of protons in the solar wind, which contain a thermal anisotropic core and a beam component (see the review by Marsch, 2006). The expanding box model (Grappin and Velli, 1996; Liewer et al., 2001) was recently applied in 1.5D hybrid models of H+-He++ solar wind plasma heated by a spectrum of turbulent Alfvénic fluctuations, and of solar wind plasma with super-Alfvénic ion relative drift (Ofman et al., 2011; Maneva et al., 2013).
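Dispersion diagrams like the one shown in Figure 17 are typically constructed by Fourier transforming a space-time record of the transverse fields. The sketch below illustrates this generic diagnostic on a synthetic Alfvén-wave signal; the signal, grid, and Alfvén speed are assumptions for illustration only, not simulation output.

```python
import numpy as np

# Synthetic space-time record B_y(t, x) of outward Alfven waves with phase speed V_A
nx, nt, dx, dt, V_A = 256, 512, 1.0, 0.1, 1.0
x = np.arange(nx) * dx
t = np.arange(nt) * dt
By = np.zeros((nt, nx))
for k in 2 * np.pi * np.array([0.05, 0.10, 0.20]):       # a few driven wavenumbers
    By += np.sin(k * x[None, :] - k * V_A * t[:, None])   # omega = V_A * k

# omega-k power spectrum: 2D FFT over (t, x)
P = np.abs(np.fft.fftshift(np.fft.fft2(By)))**2
omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(nt, d=dt))
k_axis = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(nx, d=dx))
# The power in P concentrates along the branch omega = V_A * k, the analogue of the
# intensity ridges in Figure 17 obtained from the hybrid simulation fields.
```

Applied to the transverse magnetic field and ion fluid velocities of a hybrid run, the same transform yields the separate panels of Figure 17 for each species.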
In particular, Maneva et al. (2013) studied the turbulent heating and acceleration of He++ ions by initial self-consistent spectra of Alfvén-cyclotron waves in the expanding solar wind plasma using 1.5D hybrid simulations. They found that the He++ ions are preferentially heated by the broad-band initial spectrum, resulting in a much more than mass-proportional temperature increase (see Figure 19). Maneva et al. (2013) also found that the differential acceleration of protons and He++ ions depends on the amplitude and spectral index of the magnetic fluctuations, while the solar wind expansion suppresses the differential streaming. They also found that the expansion leads in general to perpendicular cooling of the protons and alphas. However, the cooling effect of the expansion is small and the waves provide sufficient heating, maintaining a significant temperature anisotropy, in agreement with observations. Inspection of the proton and alpha velocity distributions in the V∥-V⊥ plane shows the formation of non-Maxwellian features due to the effects of the broad band spectrum, such as perpendicular broadening (i.e., temperature anisotropy), as well as the formation of a population of particles accelerated by the waves (see Figure 20).
Figure 19: Top: Temporal evolution of the parallel and perpendicular components of the ion temperatures obtained by Maneva et al. (2013) with the 1.5D hybrid model involving broadband spectra. Solid lines denote the evolution without expansion, and the dashed lines illustrate the case when solar wind expansion is considered. Bottom: Temporal evolution of the H+-He++ drift speed for this case. The dashed line shows the result with expansion.
Figure 20: The final stages of the evolution of the proton (top panels) and alpha (bottom panels) velocity distributions in the V∥-V⊥ plane in the 1.5D hybrid model initialized with the broadband spectrum of Alfvén/cyclotron waves. The formation of the accelerated particle population is evident (adapted from Maneva et al., 2013).
3.4 2D hybrid models
The 2D hybrid codes solve a similar set of equations as the 1D hybrid codes, but in two spatial dimensions. This allows an additional degree of freedom for the particle motions, and the wave propagation is not limited to parallel propagating waves, allowing oblique propagation. In addition, the parallel magnetic field component does not have to be constant in order to satisfy ∇·B = 0. As a result, a broader range of possible wave modes, wave-particle interactions, and wave-wave interactions is included in the 2D model compared to the 1D model. Obviously, the 2D models are computationally intensive, and may require parallel processing to reach a resolution and number of particles per cell in 2D comparable to 1D models that can be run on a desktop workstation. The 2D hybrid models have been used extensively in the past to model successfully the electromagnetic interactions in magnetized plasmas (McKean et al., 1994; Gary et al., 1997; Daughton et al., 1999; Gary et al., 2000, 2001, 2003; Ofman et al., 2001, 2002; Xie et al., 2004; Ofman and Viñas, 2007). Comparisons between one- and two-dimensional hybrid simulations often show qualitative agreement in the ion response (Winske and Quest, 1986; Ofman and Viñas, 2007). In addition to allowing oblique waves, the 2D code allows including a spatial inhomogeneity of the plasma density perpendicular to the magnetic field, as well as a divergent magnetic field geometry. These features are needed to describe solar wind acceleration and heating more consistently with coronal conditions.
Recently, Ofman and Viñas (2007) studied the heating and acceleration of protons and heavy ions by a spectrum of waves in the solar wind, as well as the nonlinear influence of the heavy ions on the wave structure, using the 2D hybrid model. They considered for the first time the heating and acceleration of protons and heavy ions by a driven input spectrum of Alfvén/cyclotron waves, and by a heavy ion beam, in a multi-species coronal plasma in two spatial dimensions. They found that in homogeneous plasma the ion beams heat the ions faster than the driven wave spectrum constrained by solar wind parameters, and produce a temperature anisotropy with T⊥ > T∥, in qualitative agreement with observations. The beam-heating model requires that the beam speed is larger than the local Alfvén speed. Since any reconnection process produces Alfvénic beams as an exhaust (e.g., Priest, 1982; Aschwanden, 2004), the beams could readily become super-Alfvénic as the plasma moves to regions of lower local Alfvén speed. Since the threshold of beam stability is the Alfvén speed, it is possible that remnants of this process, which takes place close to the Sun in the acceleration region of the solar wind, are seen in proton data beyond 0.3 AU (Marsch, 2006). Below, the results obtained recently by Ofman and Viñas (2007) are reviewed. Ofman and Viñas (2007) compared the evolution of the O5+ ion anisotropy relaxation by the ion-cyclotron instability by modeling the coronal plasma with both 1D and 2D hybrid codes. In Figure 21 the results of the 1D and 2D hybrid model runs are shown. The initial temperature anisotropy was set to 50 in both cases, with a parallel temperature of 1.4 × 10^6 K. It is evident that the temperature anisotropy has relaxed to similar values in the 1D and 2D runs, close to the marginally stable value of ∼ 10 obtained for the parameters used with the Vlasov stability analysis (Gary, 1993). The agreement found between the 1D hybrid and 2D hybrid evolution is consistent with the Vlasov dispersion relation, which shows maximal growth of the ion-cyclotron instability for parallel propagating modes.
Figure 21: Comparison of 1D and 2D model results. The evolution of the O5+ temperature anisotropy calculated with the 1D hybrid (dashes) and 2D hybrid (solid) models shows good agreement (Ofman and Viñas, 2007).
In Figure 22 the results of the parametric study of the He++ anisotropy relaxation obtained with the 2D hybrid code by Gary et al. (2003) are shown. In that study the parameters were \(n_\alpha/n_e = 0.05\) with initial \(T_e = T_{\parallel p}\), \(T_{\parallel\alpha}/T_{\parallel p} = 4.0\), and isotropic protons. The initial anisotropy of He++ was chosen to maximize the linear growth rate of the ion-cyclotron instability. The crosses show the anisotropy at t = 0, the squares show the anisotropy at magnetic energy saturation, and the dots represent later times. The dashed line shows the best fit of the anisotropies at Ω_p t = 400, which produces the scaling law \(T_{\perp\alpha}/T_{\parallel\alpha} - 1 = 0.71/\beta^{0.45}\). Note the qualitative agreement between the 2D hybrid study of the He++ anisotropy relaxation, the 1D hybrid study of the O5+ anisotropy relaxation shown in Figure 23, and the scaling law obtained analytically (Gary, 1993). Gary et al. (2003) have shown that the model results are consistent with Ulysses in-situ observations of solar wind protons and He++ ions.
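Scaling laws of the form T⊥/T∥ − 1 = S/β^a, like the one quoted above, are obtained by fitting a power law to the relaxed simulation states. The sketch below shows the generic log-log regression on synthetic data; the data points and the recovered coefficients are illustrative, not the published fit.

```python
import numpy as np

# Synthetic "relaxed anisotropy" data following A - 1 = S / beta**a, with scatter
rng = np.random.default_rng(2)
S_true, a_true = 0.7, 0.45
beta_par = np.logspace(-2, 0, 20)                       # parallel ion beta
anis_minus_1 = S_true / beta_par**a_true * rng.lognormal(0.0, 0.05, beta_par.size)

# Least-squares fit in log-log space: log10(A - 1) = log10(S) - a * log10(beta)
slope, intercept = np.polyfit(np.log10(beta_par), np.log10(anis_minus_1), 1)
a_fit, S_fit = -slope, 10.0**intercept
print(f"fit: T_perp/T_par - 1 = {S_fit:.2f} / beta^{a_fit:.2f}")
```

The same procedure applied to the marginally stable states of the 1D and 2D hybrid runs yields exponents close to the analytic values of Gary (1993) quoted in the text.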
Ofman and Viñas (2007) found that perpendicular heating occurs both due to the beam-driven instability, which quickly saturates nonlinearly, and due to the driven spectrum of waves. In the driven wave spectrum case the amplitude of the magnetic field fluctuations was δB/B0 = 0.06, and the frequency range of the driver was below the proton gyroresonance. It was found (see Figure 24) that the O5+ anisotropy grows quickly (within 400 Ω_p^−1) to \(T_\perp/T_\parallel \approx 4\), and then saturates nonlinearly, remaining in the range 4–5 throughout the rest of the evolution. The frequency range of the wave spectrum included the O5+ ion resonant frequency at rest. The anisotropy of the protons remains close to unity throughout the run. No significant net drift was found between the protons and O5+ ions in the wave driven case. Similar results were obtained for He++ ions.
Figure 24: The temporal evolution of the temperature anisotropy and the drift velocity for protons and O5+ ions. (a) Ions heated by the driven wave spectrum. (b) Ions heated by a beam with V_d = 1.5 V_A (adapted from Ofman and Viñas, 2007).
The non-Maxwellian features of the ion velocity distribution are evident in the phase space plane perpendicular to the magnetic field. When the initial distribution is a drifting Maxwellian with drift velocity V_d = 1.5 V_A, the perpendicular velocity distribution of O5+ is shell-like, with a decreased phase-space density in the central part of the distribution compared to the perimeter. When the initial drift velocity was increased to 2 V_A, the shell-like structure of the phase-space density of the O5+ ions became even more apparent (Figure 25). It is interesting to note that the He++ perpendicular velocity distribution for the drifting case is nearly bi-Maxwellian, and does not exhibit the shell structure.
Figure 25: The perpendicular velocity distribution of the O5+ ions obtained with the 2D hybrid model with drift velocity V_d = 2 V_A (adapted from Ofman and Viñas, 2007).
Recently, Ofman (2010) expanded this study and considered, in a parametric study, the effect of an inhomogeneous background density on the heating by high frequency circularly polarized Alfvén waves with and without drift between the protons and heavier ions. Ofman (2010) found that the inhomogeneity and the drift lead to increased heating of the solar wind ions compared to the homogeneous case, and that the spectrum of magnetic fluctuations steepens beyond Kolmogorov's slope of −5/3. In Figure 26 the magnetic energy fluctuation spectrum obtained in the 2D hybrid simulation with inhomogeneous background density is shown. The dashed-dotted line shows the best fit power law to the spectrum in the regions where the slope did not change considerably. Ofman (2010) found that in the low density region the slope was m = −1.66 for the wave driven case, and m = −1.81 for the beam driven case. However, in the high density region the slopes were m = −2.53 for the wave driven case and m = −2.80 for the beam driven case, indicating enhanced dissipation due to the refraction of Alfvén waves and the generation of small scale magnetosonic fluctuations that dissipate more effectively than Alfvénic fluctuations. Ofman et al. (2011) explored the effects of additional forms of background inhomogeneity on the magnetosonic drift instability and on solar wind plasma heating by a spectrum of Alfvénic fluctuations.
The expansion of the distant solar wind plasma at 0.3 AU and beyond, and the generation of the associated kinetic instabilities and waves, was considered in 2D hybrid models by Hellinger and Trávníček (2011, 2013).
Figure 26: The power spectrum of the fluctuations in B_z. (a) Middle of the low density region, driven wave spectrum. The dashed line is a power law fit with m = −1.66. (b) Same as (a), but in the middle of the high density region. The fit is with m = −2.53. (c) Middle of the low density region, the case with V_d = 2 V_A. The dashed line is a power law fit with m = −1.81. (d) Same as (c), but in the middle of the high density region. The dashed line is a power law fit with m = −2.80 (from Ofman, 2010).
4 Open Questions and Challenges
Although significant progress has been made in observing and modeling the solar wind over the past decades, several important questions remain unanswered. This situation stems from the lack of unambiguous observations that point to a specific physical mechanism for coronal heating and solar wind acceleration, as well as from the limitations of present models and theories. In particular, the following questions remain open:
- What is the exact physical mechanism that produces the fast and slow solar wind? This question relates directly to the question of the coronal heating mechanism.
- What is the role of waves (in a broad frequency range from kinetic to MHD) in the acceleration and heating of the solar wind?
- What is the role of density inhomogeneity and small scale turbulence (cascade) in the heating and the acceleration of the fast and slow solar wind?
- How are the non-Maxwellian velocity distributions of protons and ions in the solar wind formed?
- What determines the heavy ion composition (i.e., elemental abundance and charge states, see Zurbuchen, 2007) of the fast and slow solar wind?
- What is the role of electrons in solar wind acceleration and heating?
- How does Earth's global space environment respond to solar wind variations?
The works reviewed here bear on the first five questions, showing that MHD waves with a given spectrum provide a plausible acceleration mechanism for the fast solar wind in coronal holes, and that heating of the coronal plasma may occur through resonant and non-resonant dissipation of the wave energy. However, the fluid models do not provide the kinetic details of the dissipation processes, and the hybrid models capture only limited aspects of the resonant dissipation processes. The formation and evolution of the ion velocity distributions of the solar wind plasma are not modeled in detail from the Sun to 1 AU. Multi-fluid models address limited aspects (i.e., within the ion-fluid approximation) of the compositional variation of the solar wind in open and closed structures (Ofman, 2000, 2004a). In the reviewed models the electrons are treated only as a fluid, and their role in solar wind heating and acceleration includes only basic aspects (i.e., heat conduction, collisional coupling to ions). The last question can be addressed with global models that include the Earth's magnetosphere and ionosphere. However, the study of the kinetic processes that are at the roots of the solar wind-magnetosphere-ionosphere interactions is far from complete. The above questions are at the forefront of current research, and the answers can be obtained by a combination of improved observations and modeling.
A possible way to answer these questions is by obtaining in-situ measurements of the solar wind plasma in the region close to the Sun, where the acceleration and heating processes are still significant (McComas et al., 2007). This is the goal of the future European Solar Orbiter and NASA Solar Probe Plus missions.
5 Summary and Discussion
Satellite observations provide ample evidence for the presence of low-frequency (MHD) waves in the solar corona and the solar wind. The presence of ion-cyclotron waves and beams is evident as well in in-situ observations at 0.3 AU and beyond, and is deduced from remote-sensing spectroscopic observations. Motivated by these observations, wave-driven solar wind acceleration and heating models were developed with various degrees of approximation. The formation and the effect of beams on solar wind plasma heating were studied with hybrid models. In the present paper we have reviewed several such models of solar wind acceleration and plasma heating. The emphasis in this review is on wave driven models with a fully resolved wave spectrum in 2.5D MHD and 2.5D multi-fluid models, and in hybrid kinetic 1D and 2D models. Thermal conduction alone cannot explain the acceleration of the solar wind to fast wind speeds for plasma temperatures of 1–2 MK commonly deduced from observations in open magnetic structures. The 2.5D MHD and multi-fluid models show that an Alfvén wave spectrum in the MHD frequency range (millihertz) accelerates the fast solar wind to the observed speed of ∼ 800 km s−1 and provides the necessary energy to heat the solar wind. The advantage of the WKB approximation is that it allows incorporating the effects of Alfvén wave heating and acceleration in global 3D MHD models. However, the models that include fully resolved waves provide a more accurate and realistic account of the interaction between the waves and the solar wind plasma than the WKB approximation and the MHD models that use an ad-hoc heating function, momentum input, or variation of the polytropic index with distance from the Sun. The main limitations of the wave-driven solar wind MHD models are that the heating is described by Ohmic and viscous dissipation with empirical dissipation coefficients, and that the exact kinetic processes that underlie the fluid description can only be modeled in detail by a kinetic approach. Multi-fluid models extend beyond MHD by providing insights on the compositional variation of the solar wind plasma, on separate heating processes for electrons, protons, and heavy ions, and on the interactions between the various plasma constituents. The results of multi-fluid models are compared directly with observations of the coronal emission, consisting of ion emission lines and of the white light polarized brightness that comes from electron Thomson scattering. These comparisons provide more stringent observational constraints on solar wind models than can be achieved with single fluid MHD, since all modeled particle species must conform to the observed properties (e.g., electron temperature, proton temperature, relative abundance of heavy ions in various magnetic structures, wave signatures in separate fluids, etc.), and the various fluids are coupled through Coulomb and electromagnetic interactions. The 1D and 2D hybrid models provide the next level of physical modeling, and are reliable tools that have been tested and used for decades to study ion kinetic processes in space plasmas.
The reviewed studies concentrate on the resonant dissipation of a wave spectrum in the multi-ion solar wind plasma and include the effects of beams. The models show that the high frequency waves in the proton and ion gyroresonant frequency range can heat the solar wind heavy ions preferentially and anisotropically, and produce the anisotropic ion velocity distributions deduced from observations. High-amplitude waves can lead to beam formation, while solar wind expansion can lead to perpendicular cooling of the ions. The hybrid models show that the heating can be enhanced further by the instability of super-Alfvénic beams of heavy ions. The reviewed studies show that protons are not heated significantly by these waves due to the resonant absorption by heavier ions. Thus, the spectrum of waves that heats and accelerates the solar wind must contain both low-frequency (non-resonant) and high-frequency Alfvén waves. The hybrid models do not include the kinetics of electrons, and their possible role in the solar wind energy balance and in the dissipation of low-frequency waves is not modeled beyond the fluid description. The planned NASA Solar Probe Plus and European Solar Orbiter missions will provide new measurements in the unexplored region of the inner heliosphere. In particular, in-situ measurements of non-Maxwellian features in the proton, ion, and electron velocity distributions, such as anisotropy and beams, and measurements of the magnetic fluctuation spectrum in the acceleration region of the solar wind close to the Sun, will provide the necessary information to improve our understanding of solar wind acceleration and heating. These measurements will provide improved constraints for future theoretical studies and numerical models of solar wind plasma heating and acceleration at all levels of plasma approximation.
The author would like to acknowledge support by NASA grants NNX08AF85G, NNX08AV88G, and NNX10AC56G.
References
Abbo, L., Ofman, L. and Giordano, S., 2010, "Streamers study at solar minimum: combination of UV observations and numerical modeling", in Twelfth International Solar Wind Conference, Saint-Malo, France, 21–26 June 2009, (Eds.) Maksimovic, M., Issautier, K., Meyer-Vernet, N., Moncuquet, M., Pantelli, F., AIP Conference Proceedings, 1216, pp. 387–390, American Institute of Physics, Melville, NY.
Airapetian, V., Ofman, L., Sittler, E.C. and Kramar, M., 2011, "Probing the Thermodynamics and Kinematics of Solar Coronal Streamers", Astrophys. J., 728, 67.
Alazraki, G. and Couturier, P., 1971, "Solar Wind Acceleration Caused by the Gradient of Alfvén Wave Pressure", Astron. Astrophys., 13, 380–389.
Antonucci, E., Dodero, M.A. and Giordano, S., 2000, "Fast Solar Wind Velocity in a Polar Coronal Hole during Solar Minimum", Solar Phys., 197, 115–134.
Araneda, J.A., Marsch, E. and Viñas, A.F., 2007, "Collisionless damping of parametrically unstable Alfvén waves", J. Geophys. Res., 112(A11), 4104.
Araneda, J.A., Marsch, E. and Viñas, A.F., 2008, "Proton Core Heating and Beam Formation via Parametrically Unstable Alfvén-Cyclotron Waves", Phys. Rev. Lett., 100, 125003.
[DOI], [ADS] (Cited on page 27.)ADSCrossRefGoogle Scholar Aschwanden, M.J., 2004, Physics of the Solar Corona: An Introduction, Springer-Praxis Books in Geophysical Sciences, Springer; Praxis, Berlin; New York; Chichester. [ADS], [Google Books] (Cited on pages 16 and 31.)Google Scholar Axford, W.I. and McKenzie, J.F., 1992, "The origin of high speed solar wind streams", in Solar Wind Seven, Proceedings of the 3rd COSPAR Colloquium held in Goslar, Germany, 16–20 September 1991, (Eds.) Marsch, E., Schwenn, R., COSPAR Colloquia Series, 3, pp. 1–5, Pergamon Press, Oxford; New York. [ADS] (Cited on page 8.)CrossRefGoogle Scholar Balogh, A., Beek, T.J., Forsyth, R.J., Hedgecock, P.C., Marquedant, R.J., Smith, E.J., Southwood, D.J. and Tsurutani, B.T., 1992, "The magnetic field investigation on the Ulysses mission: Instrumentation and preliminary scientific results", Astron. Astrophys. Suppl., 92, 221–236. [ADS] (Cited on page 6.)ADSGoogle Scholar Banerjee, D., Gupta, G.R. and Teriaca, L., 2011, "Propagating MHD Waves in Coronal Holes", Space Sci. Rev., 158, 267–288. [DOI], [ADS], [arXiv:1009.2980 [astro-ph.SR]] (Cited on page 9.)ADSCrossRefGoogle Scholar Barnes, A., 1969, "Collisionless Heating of the Solar-Wind Plasma. II. Application of the Theory of Plasma Heating by Hydromagnetic Waves", Astrophys. J., 155, 311. [DOI], [ADS] (Cited on page 7.)ADSCrossRefGoogle Scholar Belcher, J.W., 1971, "Alfvénic Wave Pressures and the Solar Wind", Astrophys. J., 168, 509–524. [DOI], [ADS] (Cited on page 7.)ADSCrossRefGoogle Scholar Belcher, J.W. and Davis Jr, L., 1971, "Large-Amplitude Alfvén Waves in the Interplanetary Medium, 2", J. Geophys. Res., 76(16), 3534–3563. [DOI], [ADS] (Cited on page 7.)ADSCrossRefGoogle Scholar Braginskii, S.I., 1965, "Transport processes in plasma", in Review of Plasma Physics, (Ed.) Leontovich, M.A., Review of Plasma Physics, 1, pp. 201–311, Consultants Bureau, New York (Cited on pages 11 and 12.)ADSGoogle Scholar Bruno, R. and Carbone, V., 2013, "The Solar Wind as a Turbulence Laboratory", Living Rev. Solar Phys., 10, lrsp-2013-2. [DOI], [ADS]. URL (accessed 22 November 2013: http://www.livingreviews.org/lrsp-2013-2 (Cited on page 9.) Canals, A., Breen, A.R., Ofman, L., Moran, P.J. and Fallows, R.A., 2002, "Estimating random transverse velocities in the fast solar wind from EISCAT Interplanetary Scintillation measurements", Ann. Geophys., 20, 1265–1277. [DOI], [ADS] (Cited on page 17.)ADSCrossRefGoogle Scholar Chandran, B.D.G., 2010, "Alfvén-wave Turbulence and Perpendicular Ion Temperatures in Coronal Holes", Astrophys. J., 720, 548–554. [DOI], [ADS], [arXiv:1006.3473 [astro-ph.SR]] (Cited on page 8.)ADSCrossRefGoogle Scholar Chandran, B.D.G. and Hollweg, J.V., 2009, "Alfvén Wave Reflection and Turbulent Heating in the Solar Wind from 1 Solar Radius to 1AU: An Analytical Treatment", Astrophys. J., 707, 1659–1667. [DOI], [ADS], [arXiv:0911.1068] (Cited on page 8.)ADSCrossRefGoogle Scholar Chandran, B.D.G., Quataert, E., Howes, G.G., Xia, Q. and Pongkitiwanichakul, P., 2009, "Constraining Low-Frequency Alfvénic Turbulence in the Solar Wind Using Density-Fluctuation Measurements", Astrophys. J., 707, 1668–1675. [DOI], [ADS], [arXiv:0908.0757] (Cited on page 8.)ADSCrossRefGoogle Scholar Chandran, B.D.G., Pongkitiwanichakul, P., Isenberg, P.A., Lee, M.A., Markovskii, S.A., Hollweg, J.V. and Vasquez, B.J., 2010, "Resonant Interactions Between Protons and Oblique Alfvéen/Ion-cyclotron Waves in the Solar Corona and Solar Flares", Astrophys. J., 722, 710–720. 
[DOI], [ADS] (Cited on page 8.)ADSCrossRefGoogle Scholar Chandran, B.D.G., Dennis, T.J., Quataert, E. and Bale, S.D., 2011, "Incorporating Kinetic Physics into a Two-fluid Solar-wind Model with Temperature Anisotropy and Low-frequency Alfvén-wave Turbulence", Astrophys. J., 743, 197. [DOI], [ADS], [arXiv:1110.3029 [astro-ph.SR]] (Cited on page 8.)ADSCrossRefGoogle Scholar Cohen, O., Sokolov, I.V., Roussev, I.I. et al., 2007, "A Semiempirical Magnetohydrodynamical Model of the Solar Wind", Astrophys. J. Lett., 654, L163–L166. [DOI], [ADS] (Cited on pages 7 and 11.)ADSCrossRefGoogle Scholar Colgan, J., Abdallah Jr, J., Sherrill, M.E., Foster, M., Fontes, C.J. and Feldman, U., 2008, "Radiative Losses of Solar Coronal Plasmas", Astrophys. J., 689, 585–592. [DOI], [ADS] (Cited on page 11.)ADSCrossRefGoogle Scholar Cranmer, S.R., 2000, "Ion Cyclotron Wave Dissipation in the Solar Corona: The Summed Effect of More than 2000 Ion Species", Astrophys. J., 532, 1197–1208. [DOI], [ADS] (Cited on page 8.)ADSCrossRefGoogle Scholar Cranmer, S.R., 2012, "Self-Consistent Models of the Solar Wind", Space Sci. Rev., 172, 145–156. [DOI], [ADS], [arXiv:1007.0954 [astro-ph.SR]] (Cited on page 10.)ADSCrossRefGoogle Scholar Cranmer, S.R. and van Ballegooijen, A.A., 2005, "On the generation, propagation, and reflection of Alfvén waves from the solar photosphere to the distant heliosphere", Astrophys. J. Suppl. Ser., 156, 265–293. [DOI], [ADS], [astro-ph/0410639] (Cited on pages 7, 8, and 9.)ADSCrossRefGoogle Scholar Cranmer, S.R. and van Ballegooijen, A.A., 2012, "Proton, Electron, and Ion Heating in the Fast Solar Wind from Nonlinear Coupling between Alfvénic and Fast-mode Turbulence", Astrophys. J., 754, 92. [DOI], [ADS], [arXiv:1205.4613 [astro-ph.SR]] (Cited on page 8.)ADSCrossRefGoogle Scholar Cranmer, S.R., Field, G.B. and Kohl, J.L., 1999, "Spectroscopic Constraints on Models of Ion Cyclotron Resonance Heating in the Polar Solar Corona and High-Speed Solar Wind", Astrophys. J., 518, 937–947. [DOI], [ADS] (Cited on pages 7, 21, 25, and 27.)ADSCrossRefGoogle Scholar Cranmer, S.R., van Ballegooijen, A.A. and Edgar, R.J., 2007, "Self-consistent Coronal Heating and Solar Wind Acceleration from Anisotropic Magnetohydrodynamic Turbulence", Astrophys. J. Suppl. Ser., 171, 520–551. [DOI], [ADS], [arXiv:astro-ph/0703333] (Cited on page 7.)ADSCrossRefGoogle Scholar Daughton, W., Gary, S.P. and Winske, D., 1999, "Electromagnetic proton/proton instabilities in the solar wind: Simulations", J. Geophys. Res., 104(A3), 4657–4668. [DOI], [ADS] (Cited on page 31.)ADSCrossRefGoogle Scholar De Pontieu, B., McIntosh, S.W., Carlsson, M. et al., 2007, "Chromospheric Alfvénic Waves Strong Enough to Power the Solar Wind", Science, 318, 1574–1577. [DOI], [ADS] (Cited on page 9.)ADSCrossRefGoogle Scholar Downs, C., Roussev, I.I., van der Holst, B., Lugaz, N., Sokolov, I.V. and Gombosi, T.I., 2010, "Toward a Realistic Thermodynamic Magnetohydrodynamic Model of the Global Solar Corona", Astrophys. J., 712, 1219–1231. [DOI], [ADS], [arXiv:0912.2647 [astro-ph.SR]] (Cited on page 11.)ADSCrossRefGoogle Scholar Dwivedi, N.K., Batra, K. and Sharma, R.P., 2012, "Study of kinetic Alfvén wave and whistler wave spectra and their implication in solar wind plasma", J. Geophys. Res., 117, A07201. [DOI], [ADS] (Cited on page 8.)ADSCrossRefGoogle Scholar Evans, R.M., Opher, M., Jatenco-Pereira, V. and Gombosi, T.I., 2009, "Surface Alfvén Wave Damping in a Three-Dimensional Simulation of the Solar Wind", Astrophys. J., 703, 179–186. 
[DOI], [ADS], [arXiv:0908.3146] (Cited on page 7.)ADSCrossRefGoogle Scholar Evans, R.M., Opher, M., Oran, R., van der Holst, B., Sokolov, I.V., Frazin, R., Gombosi, T.I. and Vásquez, A., 2012, "Coronal Heating by Surface Alfvén Wave Damping: Implementation in a global Magnetohydrodynamics Model of the Solar Wind", Astrophys. J., 756, 155. [DOI], [ADS] (Cited on page 16.)ADSCrossRefGoogle Scholar Feldman, W.C., Barraclough, B.L., Phillips, J.L. and Wang, Y.-M., 1996, "Constraints on high-speed solar wind structure near its coronal base: a ULYSSES perspective", Astron. Astrophys., 316, 355–367. [ADS] (Cited on pages 8 and 21.)ADSGoogle Scholar Galinsky, V.L. and Shevchenko, V.I., 2013a, "Acceleration of the Solar Wind by Alfvén Wave Packets", Astrophys. J., 763, 31. [DOI], [ADS] (Cited on page 8.)ADSCrossRefGoogle Scholar Galinsky, V.L. and Shevchenko, V.I., 2013b, "Induced Emission of Alfvén Waves in Inhomogeneous Streaming Plasma: Implications for Solar Corona Heating and Solar Wind Acceleration", Phys. Rev. Lett., 111, 015004. [DOI], [ADS] (Cited on page 8.)ADSCrossRefGoogle Scholar Gary, S.P., 1993, Theory of Space Plasma Microinstabilities, Cambridge Atmospheric and Space Science Series, Cambridge University Press, Cambridge; New York. [Google Books] (Cited on pages 25 and 32.)CrossRefGoogle Scholar Gary, S.P., Wang, J., Winske, D. and Fuselier, S.A., 1997, "Proton temperature anisotropy upper bound", J. Geophys. Res., 102(A12), 27,159–27,170. [DOI], [ADS] (Cited on page 31.)CrossRefGoogle Scholar Gary, S.P., Yin, L., Winske, D. and Reisenfeld, D.B., 2000, "Electromagnetic alpha/proton instabilities in the solar wind", Geophys. Res. Lett., 27(9), 1355–1358. [DOI], [ADS] (Cited on page 31.)ADSCrossRefGoogle Scholar Gary, S.P., Yin, L., Winske, D. and Ofman, L., 2001, "Electromagnetic heavy ion cyclotron instability: Anisotropy constraint in the solar corona", J. Geophys. Res., 106, 10,715–10,722. [DOI], [ADS] (Cited on pages 10 and 31.)Google Scholar Gary, S.P., Yin, L., Winske, D., Ofman, L., Goldstein, B.E. and Neugebauer, M., 2003, "Consequences of proton and alpha anisotropies in the solar wind: Hybrid simulations", J. Geophys. Res., 108(A2), 1068. [DOI], [ADS] (Cited on pages 10, 25, 31, and 32.)CrossRefGoogle Scholar Gary, S.P., Yin, L. and Winske, D., 2006, "Alfvén-cyclotron scattering of solar wind ions: Hybrid simulations", J. Geophys. Res., 111(A10), 6105. [DOI], [ADS] (Cited on page 10.)CrossRefGoogle Scholar Gazis, P.R. and Lazarus, A.J., 1982, "Voyager observations of solar wind proton temperature: 1–10 AU", Geophys. Res. Lett., 9, 431–434. [DOI], [ADS] (Cited on page 8.)ADSCrossRefGoogle Scholar Goldstein, B.E., Smith, E.J., Balogh, A., Horbury, T.S., Goldstein, M.L. and Roberts, D.A., 1995, "Properties of magnetohydrodynamic turbulence in the solar wind as observed by Ulysses at high heliographic latitudes", Geophys. Res. Lett., 22, 3393–3396. [DOI], [ADS] (Cited on page 8.)ADSCrossRefGoogle Scholar Grall, R.R., Coles, W.A., Klinglesmith, M.T., Breen, A.R., Williams, P.J.S., Markkanen, J. and Esser, R., 1996, "Rapid acceleration of the polar solar wind", Nature, 379, 429. [DOI], [ADS] (Cited on page 17.)ADSCrossRefGoogle Scholar Grappin, R. and Velli, M., 1996, "Waves and streams in the expanding solar wind", J. Geophys. Res., 101, 425–444. [DOI], [ADS] (Cited on pages 14 and 27.)ADSCrossRefGoogle Scholar Grappin, R., Léorat, J. and Habbal, S.R., 2002, "Large-amplitude Alfvén waves in open and closed coronal structures: A numerical study", J. Geophys. Res., 107, 1380. 
[DOI], [ADS] (Cited on page 16.)CrossRefGoogle Scholar Guhathakurta, M., Sittler Jr, E.C. and Ofman, L., 2006, "Semiempirically derived heating function of the corona heliosphere during the Whole Sun Month", J. Geophys. Res., 111, A11215. [DOI], [ADS] (Cited on page 7.)ADSCrossRefGoogle Scholar Hahn, M. and Savin, D.W., 2013, "Observational Quantification of the Energy Dissipated by Alfvén Waves in a Polar Coronal Hole: Evidence that Waves Drive the Fast Solar Wind", Astrophys. J., 776, 78. [DOI], [ADS], [arXiv:1302.5403 [astro-ph.SR]] (Cited on page 9.)ADSCrossRefGoogle Scholar Hahn, M., Landi, E. and Savin, D.W., 2012, "Evidence ofWave Damping at Low Heights in a Polar Coronal Hole", Astrophys. J., 753, 36. [DOI], [ADS], [arXiv:1202.1743 [astro-ph.SR]] (Cited on page 9.)ADSCrossRefGoogle Scholar Hansteen, V.H. and Velli, M., 2012, "Solar Wind Models from the Chromosphere to 1 AU", Space Sci. Rev., 172, 89.121. [DOI], [ADS] (Cited on page 10.)CrossRefGoogle Scholar Heinemann, M. and Olbert, S., 1980, "Non-WKB Alfvén waves in the Solar Wind", J. Geophys. Res., 85(A3), 1311.1327. [DOI], [ADS] (Cited on page 7.)ADSCrossRefGoogle Scholar Hellinger, P. and Trávnäček, P., 2006, "Parallel and oblique proton fire hose instabilities in the presence of alpha/proton drift: Hybrid simulations", J. Geophys. Res., 111(A10), 1107. [DOI], [ADS] (Cited on page 10.)CrossRefGoogle Scholar Hellinger, P. and Trávnäček, P.M., 2011, "Proton core-beam system in the expanding solar wind: Hybrid simulations", J. Geophys. Res., 116(A15), A11101. [DOI], [ADS] (Cited on page 34.)ADSGoogle Scholar Hellinger, P. and Trávnäček, P.M., 2013, "Protons and alpha particles in the expanding solar wind: Hybrid simulations", J. Geophys. Res., 118, 5421.5430. [DOI], [ADS] (Cited on page 34.)Google Scholar Hellinger, P., Velli, M., Trávnäček, P., Gary, S.P., Goldstein, B.E. and Liewer, P.C., 2005, "Alfvén wave heating of heavy ions in the expanding solar wind: Hybrid simulations", J. Geophys. Res., 110(9), A12109. [DOI], [ADS] (Cited on pages 10 and 14.)ADSCrossRefGoogle Scholar Hollweg, J.V., 2000, "Cyclotron resonance in coronal holes: 3. A five-beam turbulence-driven model", J. Geophys. Res., 105(A7), 15,699.15,714. [DOI], [ADS] (Cited on page 8.)Google Scholar Hollweg, J.V. and Isenberg, P.A., 2002, "Generation of the fast solar wind: A review with emphasis on the resonant cyclotron interaction", J. Geophys. Res., 107(A7), 1147. [DOI], [ADS] (Cited on page 8.)CrossRefGoogle Scholar Hu, Y.Q., Esser, R. and Habbal, S.R., 2000, "A four-fluid turbulence-driven solar wind model for preferential acceleration and heating of heavy ions", J. Geophys. Res., 105, 5093–5112. [DOI], [ADS] (Cited on page 8.)ADSCrossRefGoogle Scholar Isenberg, P.A., 2004, "The kinetic shell model of coronal heating and acceleration by ion cyclotron waves: 3. The proton halo and dispersive waves", J. Geophys. Res., 109, A03101. [DOI], [ADS] (Cited on page 8.)ADSGoogle Scholar Isenberg, P.A., 2012, "A self-consistent marginally stable state for parallel ion cyclotron waves", Phys. Plasmas, 19(3), 032116. [DOI], [ADS], [arXiv:1203.1938 [physics.plasm-ph]] (Cited on page 8.)ADSMathSciNetCrossRefGoogle Scholar Isenberg, P.A. and Vasquez, B.J., 2011, "A Kinetic Model of Solar Wind Generation by Oblique Ioncyclotron Waves", Astrophys. J., 731, 88. [DOI], [ADS] (Cited on page 8.)ADSCrossRefGoogle Scholar Kaghashvili, E.K., Vasquez, B.J. 
and Hollweg, J.V., 2003, "Deceleration of streaming alpha particles interacting with waves and imbedded rotational discontinuities", J. Geophys. Res., 108(A1), 1036. [DOI], [ADS] (Cited on page 10.)ADSCrossRefGoogle Scholar Ko, Y.-K., Li, J., Riley, P. and Raymond, J.C., 2008, "Large-Scale Coronal Density and Abundance Structures and Their Association with Magnetic Field Structure", Astrophys. J., 683, 1168–1179. [DOI], [ADS] (Cited on page 6.)ADSCrossRefGoogle Scholar Kohl, J.L., Noci, G., Antonucci, E. et al., 1997, "First Results from the SOHO Ultraviolet Coronagraph Spectrometer", Solar Phys., 175, 613–644. [DOI], [ADS] (Cited on pages 6, 7, 12, 21, 25, and 27.)ADSCrossRefGoogle Scholar Kohl, J.L., Noci, G., Antonucci, E. et al., 1998, "UVCS/SOHO Empirical Determinations of Anisotropic Velocity Distributions in the Solar Corona", Astrophys. J. Lett., 501, L127–L131. [DOI], [ADS] (Cited on page 7.)ADSCrossRefGoogle Scholar Landi, E. and Landini, M., 1999, "Radiative losses of optically thin coronal plasmas", Astron. Astrophys., 347, 401–408. [ADS] (Cited on page 11.)ADSGoogle Scholar Lau, Y.-T. and Siregar, E., 1996, "Nonlinear Alfven Wave Propagation in the Solar Wind", Astrophys. J., 465, 451. [DOI], [ADS] (Cited on page 16.)ADSCrossRefGoogle Scholar Lepping, R.P., Acuña, M.H., Burlaga, L.F. et al., 1995, "The Wind Magnetic Field Investigation", Space Sci. Rev., 71, 207–229. [DOI], [ADS] (Cited on page 6.)ADSCrossRefGoogle Scholar Li, B., Xia, L.D. and Chen, Y., 2011, "Solar winds along curved magnetic field lines", Astron. Astrophys., 529, A148. [DOI], [ADS], [arXiv:1103.5211 [astro-ph.SR]] (Cited on page 8.)ADSCrossRefGoogle Scholar Li, X. and Habbal, S.R., 2005, "Hybrid simulation of ion cyclotron resonance in the solar wind: Evolution of velocity distribution functions", J. Geophys. Res., 110, A10109. [DOI], [ADS] (Cited on page 10.)ADSCrossRefGoogle Scholar Li, X., Esser, R., Habbal, S.R. and Hu, Y., 1997, "Influence of heavy ions on the high-speed solar wind", J. Geophys. Res., 102(A8), 17,419–17,432. [DOI], [ADS] (Cited on page 12.)CrossRefGoogle Scholar Li, X., Habbal, S.R., Kohl, J. and Noci, G., 1998, "The Effect of Temperature Anisotropy on Observations of Doppler Dimming and Pumping in the Inner Corona", Astrophys. J. Lett., 501, L133–L137. [DOI], [ADS] (Cited on page 7.)ADSCrossRefGoogle Scholar Li, X., Habbal, S.R., Hollweg, J.V. and Esser, R., 1999, "Heating and cooling of protons by turbulencedrivenion cyclotron waves in the fast solar wind", J. Geophys. Res., 104(A2), 2521–2535. [DOI], [ADS] (Cited on page 8.)ADSCrossRefGoogle Scholar Liewer, P.C., Velli, M. and Goldstein, B.E., 2001, "Alfvén wave propagation and ion cyclotron interactions in the expanding solar wind: One-dimensional hybrid simulations", J. Geophys. Res., 106(A12), 29,261–29,282. [DOI], [ADS] (Cited on pages 10, 14, and 27.)CrossRefGoogle Scholar Linker, J.A., Mikić, Z., Biesecker, D.A. et al., 1999, "Magnetohydrodynamic modeling of the solar corona during Whole Sun Month", J. Geophys. Res., 104, 9809–9830. [DOI], [ADS] (Cited on page 7.)ADSCrossRefGoogle Scholar Lionello, R., Linker, J.A. and Mikić, Z., 2009, "Multispectral Emission of the Sun During the First Whole Sun Month: Magnetohydrodynamic Simulations", Astrophys. J., 690, 902–912. [DOI], [ADS] (Cited on page 11.)ADSCrossRefGoogle Scholar Lu, Q.-M. and Wang, S., 2005, "Proton and He2+ Temperature Anisotropies in the Solar Wind Driven by Ion Cyclotron Waves", Chin. J. Astron. Astrophys., 5, 184–192. 
[DOI], [ADS] (Cited on page 10.)ADSCrossRefGoogle Scholar Maneva, Y.G., Viñas, A.F. and Ofman, L., 2013, "Turbulent heating and acceleration of He++ ions by spectra of Alfvén-cyclotron waves in the expanding solar wind: 1.5-D hybrid simulations", J. Geophys. Res., 118, 2842–2853. [DOI], [ADS] (Cited on pages 28, 29, and 30.)CrossRefGoogle Scholar Markovskii, S.A., Vasquez, B.J. and Chandran, B.D.G., 2010, "Perpendicular Proton Heating Due to Energy Cascade of Fast Magnetosonic Waves in the Solar Corona", Astrophys. J., 709, 1003–1008. [DOI], [ADS] (Cited on pages 8 and 10.)ADSCrossRefGoogle Scholar Marsch, E., 1992, "On the possible role of plasma waves in the heating of chromosphere and corona", in Solar Wind Seven, Proceedings of the 3rd COSPAR Colloquium held in Goslar, Germany, 16–20 September 1991, (Eds.) Marsch, E., Schwenn, R., COSPAR Colloquia Series, 3, pp. 65–68, Pergamon Press, Oxford; New York. [ADS] (Cited on page 8.)Google Scholar Marsch, E., 2006, "Kinetic Physics of the Solar Corona and Solar Wind", Living Rev. Solar Phys., 3, lrsp-2006-1. [DOI], [ADS]. URL (accessed 25 January 2010): http://www.livingreviews.org/lrsp-2006-1 (Cited on pages 5, 8, 27, and 31.) Marsch, E., Mühlhäuser, K.-H., Rosenbauer, H., Schwenn, R. and Neubauer, F.M., 1982a, "Solar Wind Helium Ions: Observations of the HELIOS Solar Probes Between 0.3 and 1 AU", J. Geophys. Res., 87(A1), 35–51. [DOI], [ADS] (Cited on page 21.)ADSCrossRefGoogle Scholar Marsch, E., Mühlhäuser, K.-H., Schwenn, R., Rosenbauer, H., Pilipp, W. and Neubauer, F.M., 1982b, "Solar Wind Protons: Three-Dimensional Velocity Distributions and Derived Plasma Parameters Measured Between 0.3 and 1 AU", J. Geophys. Res., 87, 52–72. [DOI], [ADS] (Cited on pages 8 and 21.)ADSCrossRefGoogle Scholar McComas, D.J., Elliott, H.A., Schwadron, N.A., Gosling, J.T., Skoug, R.M. and Goldstein, B.E., 2003, "The three-dimensional solar wind around solar maximum", Geophys. Res. Lett., 30, 1517. [DOI], [ADS] (Cited on page 5.)ADSCrossRefGoogle Scholar McComas, D.J., Velli, M., Lewis, W.S. et al., 2007, "Understanding coronal heating and solar wind acceleration: Case for in situ near-Sun measurements", Rev. Geophys., 45, RG1004. [DOI], [ADS] (Cited on page 37.)ADSCrossRefGoogle Scholar McKean, M.E., Winske, D. and Gary, S.P., 1994, "Two-dimensional simulations of ion anisotropy instabilities in the magnetosheath", J. Geophys. Res., 99, 11,141–11,154. [DOI], [ADS] (Cited on page 31.)CrossRefGoogle Scholar Mecheri, R., 2013, "Properties of Ion-Cyclotron Waves in the Open Solar Corona", Solar Phys., 282, 133–146. [DOI], [ADS], [arXiv:1202.5742 [astro-ph.SR]] (Cited on page 8.)ADSCrossRefGoogle Scholar Mikić, Z., Linker, J.A., Schnack, D.D., Lionello, R. and Tarditi, A., 1999, "Magnetohydrodynamic modeling of the global solar corona", Phys. Plasmas, 6, 2217–2224. [DOI], [ADS] (Cited on page 7.)ADSCrossRefGoogle Scholar Neugebauer, M., Goldstein, B.E., Smith, E.J. and Feldman, W.C., 1996, "Ulysses observations of differential alpha-proton streaming in the solar wind", J. Geophys. Res., 101(A8), 17,047–17,056. [DOI], [ADS] (Cited on page 8.)CrossRefGoogle Scholar Neugebauer, M., Goldstein, B.E., Winterhalter, D., Smith, E.J., MacDowall, R.J. and Gary, S.P., 2001, "Ion distributions in large magnetic holes in the fast solar wind", J. Geophys. Res., 106, 5635–5648. [DOI], [ADS] (Cited on page 21.)ADSCrossRefGoogle Scholar Ofman, L., 2000, "Source regions of the slow solar wind in coronal streamers", Geophys. Res. Lett., 27, 2885–2888. 
[DOI], [ADS] (Cited on page 37.)ADSCrossRefGoogle Scholar Ofman, L., 2004a, "Three-fluid model of the heating and acceleration of the fast solar wind", J. Geophys. Res., 109, A07102. [DOI], [ADS] (Cited on pages 7, 11, 12, 20, 21, 22, 23, 24, 25, and 37.)ADSCrossRefGoogle Scholar Ofman, L., 2004b, "The origin of the slow solar wind in coronal streamers", Adv. Space Res., 33, 681–688. [DOI], [ADS] (Cited on page 7.)ADSCrossRefGoogle Scholar Ofman, L., 2005, "MHD Waves and Heating in Coronal Holes", Space Sci. Rev., 120, 67–94. [DOI], [ADS] (Cited on pages 7 and 11.)ADSCrossRefGoogle Scholar Ofman, L., 2010, "Hybrid model of inhomogeneous solar wind plasma heating by Alfvén wave spectrum: Parametric studies", J. Geophys. Res., 115, A04108. [DOI], [ADS] (Cited on pages 10, 33, 34, and 36.)ADSCrossRefGoogle Scholar Ofman, L. and Davila, J.M., 1997, "Do First Results from SOHO UVCS Indicate That the Solar Wind Is Accelerated by Solitary Waves?", Astrophys. J. Lett., 476, L51–L54. [DOI], [ADS] (Cited on pages 7 and 16.)ADSCrossRefGoogle Scholar Ofman, L. and Davila, J.M., 1998, "Solar wind acceleration by large-amplitude nonlinear waves: Parametric study", J. Geophys. Res., 103(A10), 23,677–23,690. [DOI], [ADS] (Cited on pages 7, 16, 17, and 18.)CrossRefGoogle Scholar Ofman, L. and Davila, J.M., 2001, "Three-Fluid 2.5-dimensional Magnetohydrodynamic Model of the Effective Temperature in Coronal Holes", Astrophys. J., 553, 935–940. [DOI], [ADS] (Cited on pages 7 and 24.)ADSCrossRefGoogle Scholar Ofman, L. and Viñas, A.F., 2007, "Two-dimensional hybrid model of wave and beam heating of multi-ion solar wind plasma", J. Geophys. Res., 112, A06104. [DOI], [ADS] (Cited on pages 10, 14, 31, 32, 34, and 35.)ADSCrossRefGoogle Scholar Ofman, L. and Wang, T.J., 2008, "Hinode observations of transverse waves with flows in coronal loops", Astron. Astrophys., 482, L9–L12. [DOI], [ADS] (Cited on page 9.)ADSCrossRefGoogle Scholar Ofman, L., Viñs, A. and Gary, S.P., 2001, "Constraints on the O+5 Anisotropy in the Solar Corona", Astrophys. J. Lett., 547, L175–L178. [DOI], [ADS] (Cited on pages 10, 25, 31, and 33.)ADSCrossRefGoogle Scholar Ofman, L., Gary, S.P. and Viñas, A.F., 2002, "Resonant heating and acceleration of ions in coronal holes driven by cyclotron resonant spectra", J. Geophys. Res., 107(A12), 1461. [DOI], [ADS] (Cited on pages 10, 24, 25, 26, and 31.)CrossRefGoogle Scholar Ofman, L., Davila, J.M., Nakariakov, V.M. and Viñas, A.-F., 2005, "High-frequency Alfvén waves in multiion coronal plasma: Observational implications", J. Geophys. Res., 110, A09102. [DOI], [ADS] (Cited on pages 10, 15, 25, 27, and 28.)ADSCrossRefGoogle Scholar Ofman, L., Viñas, A.F. and Moya, P.S., 2011, "Hybrid models of solar wind plasma heating", Ann. Geophys., 29, 1071–1079. [DOI], [ADS] (Cited on pages 10, 28, and 34.)ADSCrossRefGoogle Scholar Oran, R., van der Holst, B., Landi, E., Jin, M., Sokolov, I.V. and Gombosi, T.I., 2013, "A Global Wavedriven Magnetohydrodynamic Solar Model with a Unified Treatment of Open and Closed Magnetic Field Topologies", Astrophys. J., 778, 176. [DOI], [ADS], [arXiv:1307.4510 [astro-ph.SR]] (Cited on pages 15 and 16.)ADSCrossRefGoogle Scholar Osterbrock, D.E., 1961, "The Heating of the Solar Chromosphere, Plages, and Corona by Magnetohydrodynamic Waves", Astrophys. J., 134, 347. [DOI], [ADS] (Cited on page 7.)ADSCrossRefGoogle Scholar Parker, E.N., 1958, "Dynamics of the interplanetary gas and magnetic fields", Astrophys. J., 128, 664–676. 
[DOI], [ADS] (Cited on page 7.)ADSCrossRefGoogle Scholar Parker, E.N., 1963, "The Solar-Flare Phenomenon and the Theory of Reconnection and Annihiliation of Magnetic Fields", Astrophys. J. Suppl. Ser., 8, 177–211. [DOI], [ADS] (Cited on page 7.)ADSCrossRefGoogle Scholar Podesta, J.J., Roberts, D.A. and Goldstein, M.L., 2006, "Power spectrum of small-scale turbulent velocity fluctuations in the solar wind", J. Geophys. Res., 111, A10109. [DOI], [ADS] (Cited on page 9.)ADSCrossRefGoogle Scholar Priest, E.R., 1982, Solar Magnetohydrodynamics, Geophysics and Astrophysics Monographs, 21, Reidel, Dordrecht; Boston. [ADS], [Google Books] (Cited on pages 11 and 31.)CrossRefGoogle Scholar Roussev, I.I., Gombosi, T.I., Sokolov, I.V. et al., 2003, "A Three-dimensional Model of the Solar Wind Incorporating Solar Magnetogram Observations", Astrophys. J. Lett., 595, L57–L61. [DOI], [ADS] (Cited on page 7.)ADSCrossRefGoogle Scholar Sittler Jr, E.C. and Guhathakurta, M., 1999, "Semiempirical Two-dimensional MagnetoHydrodynamic Model of the Solar Corona and Interplanetary Medium", Astrophys. J., 523, 812–826. [DOI], [ADS] (Cited on page 7.)ADSCrossRefGoogle Scholar Sittler Jr, E.C. and Ofman, L., 2006, "2D MHD model of the solar corona and solar wind: Recent results", in Solar Influence on the Heliosphere and Earth's Environment: Recent Progress and Prospects, Proceedings of the ILWS Workshop, Goa, India, February 19–24, 2006, (Eds.) Gopalswamy, N., Bhattacharyya, A., pp. 128–131, Quest Publications, Mumbai. [ADS]. Online version (accessed 25 January 2010): http://cdaw.gsfc.nasa.gov/publications/ilws_goa2006/ (Cited on page 7.)Google Scholar Smith, C.W., Vasquez, B.J. and Hamilton, K., 2006, "Interplanetary magnetic fluctuation anisotropy in the inertial range", J. Geophys. Res., 111, A09111. [DOI], [ADS] (Cited on page 6.)ADSGoogle Scholar Sokolov, I.V., van der Holst, B., Oran, R. et al., 2013, "Magnetohydrodynamic Waves and Coronal Heating: Unifying Empirical and MHD Turbulence Models", Astrophys. J., 764, 23. [DOI], [ADS], [arXiv:1208.3141 [astro-ph.SR]] (Cited on page 16.)ADSCrossRefGoogle Scholar Stone, E.C., Frandsen, A.M., Mewaldt, R.A., Christian, E.R., Margolies, D., Ormes, J.F. and Snow, F., 1998, "The Advanced Composition Explorer", Space Sci. Rev., 86, 1–22. [DOI], [ADS] (Cited on page 6.)ADSCrossRefGoogle Scholar Strachan, L., Suleiman, R., Panasyuk, A.V., Biesecker, D.A. and Kohl, J.L., 2002, "Empirical densities, kinetic temperatures, and outflow velocities in the equatorial streamer belt at solar minimum", Astrophys. J., 571, 1008–1014. [DOI], [ADS] (Cited on page 8.)ADSCrossRefGoogle Scholar Suzuki, T.K. and Inutsuka, S.-i., 2005, "Making the Corona and the Fast Solar Wind: A Self-consistent Simulation for the Low-Frequency Alfvén Waves from the Photosphere to 0.3 AU", Astrophys. J. Lett., 632, L49–L52. [DOI], [ADS], [arXiv:astro-ph/0506639] (Cited on pages 7, 17, and 19.)ADSCrossRefGoogle Scholar Suzuki, T.K. and Inutsuka, S.-I., 2006, "Solar winds driven by nonlinear low-frequency Alfvén waves from the photosphere: Parametric study for fast/slow winds and disappearance of solar winds", J. Geophys. Res., 111, A06101. [DOI], [ADS], [arXiv:astro-ph/0511006] (Cited on page 7.)ADSGoogle Scholar Tóth, G., Sokolov, I.V., Gombosi, T.I. et al., 2005, "Space Weather Modeling Framework: A new tool for the space science community", J. Geophys. Res., 110(A9), A12226. [DOI], [ADS] (Cited on page 16.)ADSCrossRefGoogle Scholar Tu, C.-Y. 
and Marsch, E., 1995, "MHD structures, waves and turbulence in the solar wind: Observations and theories", Space Sci. Rev., 73(1/2), 1–210. [DOI], [ADS] (Cited on page 9.)ADSCrossRefGoogle Scholar Tu, C.-Y. and Marsch, E., 1997, "Two-Fluid Model for Heating of the Solar Corona and Acceleration of the Solar Wind by High-Frequency Alfvén Waves", Solar Phys., 171, 363–391. [DOI], [ADS] (Cited on page 8.)ADSCrossRefGoogle Scholar Usmanov, A.V. and Goldstein, M.L., 2003, "A tilted-dipole MHD model of the solar corona and solar wind", J. Geophys. Res., 108, 1354. [DOI], [ADS] (Cited on page 7.)CrossRefGoogle Scholar Usmanov, A.V., Goldstein, M.L., Besser, B.P. and Fritzer, J.M., 2000, "A global MHD solar wind model with WKB Alfvén waves: Comparison with Ulysses data", J. Geophys. Res., 105(A6), 12,675–12,696. [DOI], [ADS] (Cited on pages 7 and 16.)CrossRefGoogle Scholar Uzzo, M., Strachan, L. and Vourlidas, A., 2007, "The Physical Properties of Coronal Streamers. II.", Astrophys. J., 671, 912–925. [DOI], [ADS] (Cited on page 8.)ADSCrossRefGoogle Scholar van der Holst, B., Manchester IV, W.B., Frazin, R.A., Vásquez, A.M., Tóth, G. and Gombosi, T.I., 2010, "A Data-driven, Two-temperature Solar Wind Model with Alfvén Waves", Astrophys. J., 725, 1373. [DOI], [ADS] (Cited on page 16.)ADSCrossRefGoogle Scholar Vásquez, A.M., van Ballegooijen, A.A. and Raymond, J.C., 2003, "The Effect of Proton Temperature Anisotropy on the Solar Minimum Corona and Wind", Astrophys. J., 598, 1361–1374. [DOI], [ADS], [arXiv:astro-ph/0310846] (Cited on page 7.)ADSCrossRefGoogle Scholar Vasquez, B.J., Smith, C.W., Hamilton, K., MacBride, B.T. and Leamon, R.J., 2007, "Evaluation of the turbulent energy cascade rates from the upper inertial range in the solar wind at 1 AU", J. Geophys. Res., 112(A11), 7101. [DOI], [ADS] (Cited on page 9.)CrossRefGoogle Scholar Velli, M., 2003, "MHD turbulence and the heating of astrophysical plasmas", Plasma Phys. Control. Fusion, 45(26), A205–A216. [DOI], [ADS] (Cited on page 8.)ADSCrossRefGoogle Scholar Verdini, A., Velli, M. and Buchlin, E., 2009, "Turbulence in the Sub-Alfvénic Solar Wind Driven by Reflection of Low-Frequency Alfvén Waves", Astrophys. J. Lett., 700, L39–L42. [DOI], [ADS], [arXiv:0905.2618] (Cited on page 8.)ADSCrossRefGoogle Scholar Verdini, A., Velli, M., Matthaeus, W.H., Oughton, S. and Dmitruk, P., 2010, "A Turbulence-Driven Model for Heating and Acceleration of the Fast Wind in Coronal Holes", Astrophys. J. Lett., 708, L116–L120. [DOI], [ADS], [arXiv:0911.5221] (Cited on page 8.)ADSCrossRefGoogle Scholar Voitenko, Y. and Goossens, M., 2006, "Energization of Plasma Species by Intermittent Kinetic Alfvén Waves", Space Sci. Rev., 122, 255–270. [DOI], [ADS] (Cited on page 8.)ADSCrossRefGoogle Scholar Wambecq, A., 1978, "Rational Runge-Kutta Methods for Solving Systems of Ordinary Differential Equations", Computing, 20, 333–342. [DOI] (Cited on page 14.)MathSciNetzbMATHCrossRefGoogle Scholar Winske, D. and Omidi, N., 1993, "Hybrid Codes: Methods and Applications", in Computer Space Plasma Physics: Simulation Techniques and Software, International School for Space Simulations (ISSS-4), Kyoto, March 25–30, 1990 and Nara, April 2–6, 1990, (Eds.) Matsumoto, H., Omura, Y., pp. 103–160, Terra Scientific Publishing, Tokyo. Online version (accessed 25 January 2010): http://www.terrapub.co.jp/e-library/cspp/ (Cited on pages 9 and 16.)Google Scholar Winske, D. and Quest, K.B., 1986, "Electromagnetic ion beam instabilities: Comparison of one- and two-dimensional simulations", J. 
Geophys. Res., 91, 8789–8797. [DOI], [ADS] (Cited on page 31.)

Xie, H., Ofman, L. and Viñas, A.F., 2004, "Multiple ions resonant heating and acceleration by Alfvén/cyclotron fluctuations in the corona and the solar wind", J. Geophys. Res., 109, A08103. [DOI], [ADS] (Cited on pages 10 and 31.)

Zurbuchen, T.H., 2007, "A New View of the Coupling of the Sun and the Heliosphere", Annu. Rev. Astron. Astrophys., 45, 297–338. [DOI], [ADS] (Cited on page 37.)

NASA Goddard Space Flight Center, Catholic University of America, Greenbelt, USA

Ofman, L., Living Rev. Sol. Phys. (2010) 7: 4. https://doi.org/10.12942/lrsp-2010-4. First Online: 15 October 2010.
Fekete-Szegő inequalities for certain class of analytic functions connected with q-analogue of Bessel function Sheza M. El-Deeb ORCID: orcid.org/0000-0002-4052-391X1,2 & Teodor Bulboacă ORCID: orcid.org/0000-0001-8026-218X3 In this paper, we obtain Fekete-Szegő inequalities for a certain class of analytic functions f satisfying \(1+\frac {1}{\zeta }\left [\frac {z\left (\mathcal {N}_{\nu,q}^{\lambda }f(z)\right)^{\prime }} {(1-\gamma)\mathcal {N} _{\nu,q}^{\lambda }f(z)+\gamma z\left (\mathcal {N}_{\nu,q}^{\lambda }f(z) \right)^{\prime }}-1\right ]\prec \Psi (z)\). Application of our results to certain functions defined by convolution products with a normalized analytic function is given, and in particular, Fekete-Szegő inequalities for certain subclasses of functions defined through Poisson distribution are obtained. Let \(\mathcal {A}\) denote the class of analytic functions of the form: $$ f(z)=z+\sum\limits_{k=2}^{\infty}a_{k}z^{k},\;z\in\mathbb{D}:=\{z\in\mathbb{C }:|z|<1\}, $$ and \(\mathcal {S}\) be the subclass of \(\mathcal {A}\) which are univalent functions in \(\mathbb {D}\). If \(k\in \mathcal {A}\) is given by: $$ k(z)=z+\sum\limits_{k=2}^{\infty}b_{k}z^{k},\;z\in\mathbb{D}, $$ then, the Hadamard (or convolution) product of f and k is defined by: $$ (f\times k)(z):=z+\sum\limits_{k=2}^{\infty}a_{k}b_{k}z^{k},\;z\in\mathbb{D}. $$ If f and F are analytic functions in \(\mathbb {D}\), we say that fis subordinate toF, written f≺F, if there exists a Schwarz functionw, which is analytic in \(\mathbb {D}\), with w(0)=0, and |w(z)|<1 for all \(z\in \mathbb {D}\), such that \(f(z)=F(w(z)), z\in \mathbb {D}\). Furthermore, if the function F is univalent in \(\mathbb {D} \), then we have the following equivalence (see [1] and [2]): $$f(z)\prec F(z)\Leftrightarrow f(0)=F(0)\;\text{and}\;f(\mathbb{D})\subset F(\mathbb{D}). $$ The Bessel function of the first kind of order ν is defined by the infinite series: $$J_{\nu}(z):=\sum\limits_{k\geq0}\frac{(-1)^{k}\left(\frac{z}{2} \right)^{2k+\nu}}{k!\Gamma\left(k+\nu+1\right)}, \;z\in\mathbb{C},\quad\left(\nu\in\mathbb{R}\right), $$ where Γ stands for the Gamma function. Recently, Szász and Kupán [3] investigated the univalence of the normalized Bessel function of the first kind \(g_{\nu }:\mathbb {D}\rightarrow \mathbb {C}\) defined by (see also [4–6]) $$\begin{array}{*{20}l} g_{\nu}(z):=2^{\nu}\Gamma(\nu+1)z^{1-\frac{\nu}{2}}J_{\nu}(z^{\frac{1}{2}}) \\ =z+\sum\limits_{k=2}^{\infty}\frac{(-1)^{k-1}\Gamma(\nu+1)}{ 4^{k-1}(k-1)!\Gamma(k+\nu)} z^{k},\;z\in\mathbb{D},\quad\left(\nu\in\mathbb{R }\right). \end{array} $$ For 0<q<1, the q-derivative operator for gν is defined by: $$\begin{array}{*{20}l} \partial_{q}g_{\nu}(z)= \partial_{q}\left[z+\sum\limits_{k=2}^{\infty}\frac{ (-1)^{k-1}\Gamma(\nu+1)}{4^{k-1}(k-1)! \Gamma(k+\nu)}z^{k}\right]:=\frac{ g_{\nu}(qz)-g_{\nu}(z)}{z(q-1)}= \notag \\ 1+\sum\limits_{k=2}^{\infty}\frac{(-1)^{k-1}\Gamma(\nu+1)}{4^{k-1}(k-1)! \Gamma(k+\nu)}[k,q]z^{k-1},\;z\in\mathbb{D}, \end{array} $$ $$ [k,q]:=\frac{1-q^{k}}{1-q}=1+\sum\limits_{j=1}^{k-1}q^{j},\qquad\left[0,q \right]:=0. $$ Using definition formula (4), we will define the next two products: (i) For any non-negative integer k, the q-shifted factorial is given by: $$[k,q]!:=\left\{ \begin{array}{lll} 1, & \text{if} & k=0, \\ \left[1,q\right]\left[2,q\right]\left[3,q\right]\dots[k,q], & \text{if} & k\in\mathbb{N}. \end{array} \right. 
$$ (ii) For any positive number r, the q-generalized Pochhammer symbol is defined by: $$\left[r,q\right]_{k}:=\left\{ \begin{array}{lll} 1, & \text{if} & k=0, \\ \left[r,q\right]\left[r+1,q\right]\dots\left[r+k-1,q\right], & \text{if} & k\in\mathbb{N}. \end{array} \right. $$ For ν>0,λ>−1, and 0<q<1, define the function \(\mathcal {I} _{\nu,q}^{\lambda }:\mathbb {D}\rightarrow \mathbb {C}\) by: $$\mathcal{I}_{\nu,q}^{\lambda}(z):= z+\sum\limits_{k=2}^{\infty}\frac{ (-1)^{k-1}\Gamma(\nu+1)}{4^{k-1}(k-1)!\Gamma(k+\nu)} \frac{[k,q]!}{ [\lambda+1,q]_{k-1}}z^{k},\;z\in\mathbb{D}. $$ A simple computation shows that: $$\mathcal{I}_{\nu,q}^{\lambda}(z)\times\mathcal{M}_{q,\lambda+1}(z)=z\, \partial_{q}g_{\nu}(z),\;z\in\mathbb{D}, $$ where the function \(\mathcal {M}_{q,\lambda +1}\) is given by: $$\mathcal{M}_{q,\lambda+1}(z):=z+\sum\limits_{k=2}^{\infty}\frac{ [\lambda+1,q]_{k-1}}{[k-1,q]!}z^{k},\;z\in\mathbb{D}. $$ Using the definition of q-derivative along with the idea of convolutions, we introduce the linear operator \(\mathcal {N}_{\nu,q}^{\lambda }:\mathcal {A} \rightarrow \mathcal {A}\) defined by: $$\begin{array}{*{20}l} \mathcal{N}_{\nu,q}^{\lambda}f(z):=\mathcal{I}_{\nu,q}^{\lambda}(z)\times f(z)= z+\sum\limits_{k=2}^{\infty}\psi_{k}a_{k}z^{k},\;z\in\mathbb{D}, \\ (\nu>0,\;\lambda>-1,\;0< q<1), \notag \end{array} $$ $$ \psi_{k}:=\frac{(-1)^{k-1}\Gamma(\nu+1)}{4^{k-1}(k-1)!\Gamma(k+\nu)}\cdot \frac{[k,q]!}{[\lambda+1,q]_{k-1}}. $$ From definition relation (5), we can easily verify that the next relations hold for all \(f\in \mathcal {A}\): (i) \([\lambda +1,q]\mathcal {N}_{\nu,q}^{\lambda }f(z)=[\lambda,q]\mathcal {N} _{\nu,q}^{\lambda +1}f(z) +q^{\lambda }z\partial _{q}\left (\mathcal {N} _{\nu,q}^{\lambda +1}f(z)\right), z\in \mathbb {D}\); (ii) \(\lim \limits _{q\to 1^{-}}\mathcal {N}_{\nu,q}^{\lambda }f(z)= \mathcal {I} _{\nu,1}^{\lambda }\times f(z)=:\mathcal {I}_{\nu }^{\lambda }f(z)=\) \(z+\sum \limits _{k=2}^{\infty }\frac {k!}{(\lambda +1)_{k-1}}\frac {(-1)^{k-1} \Gamma (\nu +1)}{4^{k-1}(k-1)!\Gamma (k+\nu)}\,a_{k}z^{k}, z\in \mathbb {D}\). Now, we define the class of functions \(\mathcal {M}_{\nu,q}^{\lambda,\gamma }(\zeta ;\Psi)\) as follows: Let \(\Psi (z):=1+B_{1}z+B_{2}z^{2}+\dots, z\in \mathbb {D }\), with B1>0, be a starlike (univalent) function with respect to 1, which maps the unit disk \(\mathbb {D}\) onto a region included in the right half plane which is symmetric with respect to the real axis. For \(\zeta \in \mathbb {C}^{\ast }\), and 0≤γ<1, the function \(f\in \mathcal {A}\) is said to be in the class \(\mathcal {M}_{\nu,q}^{\lambda,\gamma }(\zeta ;\Psi)\)if the function $$1+\frac{1}{\zeta}\left[\frac{z\left(\mathcal{N}_{\nu,q}^{\lambda}f(z) \right)^{\prime}} {(1-\gamma)\mathcal{N}_{\nu,q}^{\lambda}f(z)+\gamma z\left(\mathcal{N}_{\nu,q}^{\lambda}f(z)\right)^{\prime}}-1\right] $$ is analytic in \(\mathbb {D}\) and satisfies: $$\begin{array}{*{20}l} 1+\frac{1}{\zeta}\left[\frac{z\left(\mathcal{N}_{\nu,q}^{\lambda}f(z) \right)^{\prime}} {(1-\gamma)\mathcal{N}_{\nu,q}^{\lambda}f(z)+\gamma z\left(\mathcal{N}_{\nu,q}^{\lambda}f(z)\right)^{\prime}}-1\right] \prec\Psi(z) \\ \left(\nu>0,\;\lambda>-1,\;0< q<1,\;\zeta\in\mathbb{C}^{\ast},\;0\leq\gamma<1 \right). 
\end{array} $$ Putting q→1−, we obtain that \(\lim \limits _{q\to 1^{-}}\mathcal {M} _{\nu,q}^{\lambda,\gamma }(\zeta ;\Psi)=:\mathcal {G}_{\nu }^{\lambda,\gamma }(\zeta ;\Psi)\), where $$\begin{array}{*{20}l} \mathcal{G}_{\nu}^{\lambda,\gamma}(\zeta;\Psi):= \left\{1+\frac{1}{\zeta} \left[\frac{z\left(\mathcal{I}_{\nu}^{\lambda}f(z)\right)^{\prime}} { (1-\gamma)\mathcal{I}_{\nu}^{\lambda}f(z)+\gamma z\left(\mathcal{I} _{\nu}^{\lambda}f(z)\right)^{\prime}}-1\right] \prec\Psi(z)\right\} \\ \left(\nu>0,\;\lambda>-1,\;\zeta\in\mathbb{C}^{\ast},\;0\leq\gamma<1\right). \end{array} $$ In this paper, we obtain the Fekete-Szegő inequalities for the functions of the class \(\mathcal {M}_{\nu,q}^{\lambda,\gamma }(\zeta ;\Psi)\). We give some application of our results to certain functions defined by convolution products with a normalized analytic function. In particular, Fekete-Szegő inequalities for certain subclasses of functions defined through Poisson distribution are obtained. Fekete-Szegő problem Denoted by \(\mathcal {P}\), the well-known Carathéodory's class of analytic functions in \(\mathbb {D}\), normalized with P(0)=1, and having positive real part in \(\mathbb {D}\), that is ReP(z)>0 for all \(z\in \mathbb {D}\) (see [7]). To prove our results, we need the following two lemmas. [8, Lemma 3] If \(p(z)=1+c_{1}z+c_{2}z^{2}+\dots \in \mathcal {P}\), and α is a complex number, then $$\max\left|c_{2}-\alpha c_{1}^{2}\right|=2\max\{1;\left|2\alpha-1\right|\}. $$ [9, Lemma 1] If \(p(z)=1+c_{1}z+c_{2}z^{2}+\dots \in \mathcal {P}\), then $$\left|c_{2}-\alpha c_{1}^{2}\right|\leq\left\{ \begin{array}{lll} -4\alpha+2, & \text{if} & \alpha\leq 0, \\ 2, & \text{if} & 0\leq\alpha\leq 1, \\ 4\alpha-2, & \text{if} & \alpha\geq 1. \end{array} \right. $$ When α<0 or α>1, the equality holds if and only if \(p(z)= \frac {1+z}{1-z}\) or one of its rotations. If 0<α<1, then the equality holds if and only if \(p(z)=\frac {1+z^{2} }{1-z^{2}}\) or one of its rotations. If α=0, the equality holds if and only if: $$p(z)=\left(\frac{1}{2}+\frac{\lambda}{2}\right)\frac{1+z}{1-z}+\left(\frac{1 }{2}-\frac{\lambda}{2}\right)\frac{1-z}{1+z}, \quad\text{with}\quad 0\leq\lambda\leq 1, $$ or one of its rotations. $$\frac{1}{p(z)}=\left(\frac{1}{2}+\frac{\lambda}{2}\right)\frac{1+z}{1-z} +\left(\frac{1}{2}-\frac{\lambda}{2}\right)\frac{1-z}{1+z}, \quad\text{with} \quad 0\leq\lambda\leq 1. $$ Like it was mentioned in [9, pages 162–163], although the above upper bound is sharp, it can be improved as follows when 0<α<1: $$ \left|c_{2}-\alpha c_{1}^{2}\right|+\alpha\left|c_{1}\right|^{2}\leq 2,\quad \text{if}\quad 0<\alpha\leq\frac{1}{2}, $$ $$ \left|c_{2}-\alpha c_{1}^{2}\right|+(1-\alpha)\left|c_{1}\right|^{2}\leq 2,\quad\text{if}\quad\frac{1}{2}\leq\alpha<1. $$ If the function f given by (1) belongs to the class \( \mathcal {M}_{\nu,q}^{\lambda,\gamma }(\zeta ;\Psi)\), with Ψ(z)=1+B1z+B2z2+… satisfying the conditions of Definition 1, and μ is a complex number, then: $$\left|a_{3}-\mu a_{2}^{2}\right|\leq\frac{|\zeta|B_{1}}{2(1-\gamma)\psi_{3}} \cdot \max\left\{1;\left|\frac{B_{2}}{B_{1}}+\frac{\zeta B_{1}(1+\gamma)}{ 1-\gamma} -\frac{2\mu\zeta B_{1}\psi_{3}}{(1-\gamma)\psi_{2}^{2}} \right|\right\}, $$ where ψk,k∈{2,3}, are given by (6). 
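For a concrete sense of the quantities entering this bound, the following Python sketch evaluates ψ2 and ψ3 from (6) and the right-hand side of Theorem 1 for one assumed set of parameters (ν, λ, q, γ, μ, a real ζ, and the sample choice Ψ(z)=(1+z)/(1−z), so that B1=B2=2). All numerical values are illustrative assumptions, not values used in the paper.

# Numerical sketch only: psi_k from (6) and the bound of Theorem 1 for sample
# parameters; zeta is taken real here for simplicity (the theorem allows complex zeta).
import math

def q_bracket(r, q):
    """[r,q] = (1 - q^r) / (1 - q), for real r > 0."""
    return (1.0 - q**r) / (1.0 - q)

def q_factorial(k, q):
    """[k,q]! = [1,q][2,q]...[k,q] for a non-negative integer k."""
    out = 1.0
    for j in range(1, k + 1):
        out *= q_bracket(j, q)
    return out

def q_pochhammer(r, k, q):
    """[r,q]_k = [r,q][r+1,q]...[r+k-1,q]."""
    out = 1.0
    for j in range(k):
        out *= q_bracket(r + j, q)
    return out

def psi(k, nu, lam, q):
    """psi_k of (6)."""
    sign = (-1.0) ** (k - 1)
    num = sign * math.gamma(nu + 1.0) * q_factorial(k, q)
    den = 4.0 ** (k - 1) * math.factorial(k - 1) * math.gamma(k + nu) \
          * q_pochhammer(lam + 1.0, k - 1, q)
    return num / den

def theorem1_bound(zeta, gamma, mu, nu, lam, q, B1=2.0, B2=2.0):
    """Right-hand side of the inequality in Theorem 1."""
    p2, p3 = psi(2, nu, lam, q), psi(3, nu, lam, q)
    inner = B2 / B1 + zeta * B1 * (1 + gamma) / (1 - gamma) \
            - 2.0 * mu * zeta * B1 * p3 / ((1 - gamma) * p2**2)
    return abs(zeta) * B1 / (2.0 * (1 - gamma) * p3) * max(1.0, abs(inner))

if __name__ == "__main__":
    print("psi_2, psi_3 =", psi(2, 1.0, 0.0, 0.5), psi(3, 1.0, 0.0, 0.5))
    print("Theorem 1 bound:", theorem1_bound(zeta=1.0, gamma=0.0, mu=1.0, nu=1.0, lam=0.0, q=0.5))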
If \(f\in \mathcal {M}_{\nu,q}^{\lambda,\gamma }(\zeta ;\Psi)\), then there exists a Schwarz function w, that is w is analytic in \(\mathbb {D}\), with w(0)=0 and \(\left |w(z)\right |<1, z\in \mathbb {D}\), such that: $$ 1+\frac{1}{\zeta}\left[\frac{z\left(\mathcal{N}_{\nu,q}^{\lambda}f(z) \right)^{\prime}} {(1-\gamma)\mathcal{N}_{\nu,q}^{\lambda}f(z)+\gamma z\left(\mathcal{N}_{\nu,q}^{\lambda}f(z)\right)^{\prime}}-1\right]=\Psi(w(z)),\;z\in \mathbb{D}. $$ Since w is a Schwarz function, it follows that the function p1 defined by: $$ p_{1}(z):=\frac{1+w(z)}{1-w(z)}=1+c_{1}z+c_{2}z^{2}+\dots,\;z\in\mathbb{D}, $$ belongs to \(\mathcal {P}\). Defining the function p by: $$ p(z):=1+\frac{1}{\zeta}\left[\frac{z\left(\mathcal{N}_{\nu,q}^{\lambda}f(z) \right)^{\prime}} {(1-\gamma)\mathcal{N}_{\nu,q}^{\lambda}f(z)+\gamma z\left(\mathcal{N}_{\nu,q}^{\lambda}f(z)\right)^{\prime}}-1\right]= 1+d_{1}z+d_{2}z^{2}+\dots,\;z\in\mathbb{D}, $$ in view of (9) and (10), we have: $$ p(z)=\Psi\left(\frac{p_{1}(z)-1}{p_{1}(z)+1}\right),\;z\in\mathbb{D}. $$ From (10), we easily get: $$\frac{p_{1}(z)-1}{p_{1}(z)+1}=\frac{1}{2}\left[c_{1}z+\left(c_{2}-\frac{ c_{1}^{2}}{2}\right)z^{2}+\left(c_{3}+\frac{c_{1}^{3}}{4} -c_{1}c_{2} \right)z^{3}+\dots\right],\;z\in\mathbb{D}; $$ $$\Psi\left(\frac{p_{1}(z)-1}{p_{1}(z)+1}\right)=1+\frac{1}{2}B_{1}c_{1}z + \left[\frac{1}{2}B_{1}\left(c_{2}-\frac{c_{1}^{2}}{2}\right)+\frac{1}{4} B_{2}c_{1}^{2}\right]z^{2}+\dots,\;z\in\mathbb{D}, $$ and from (12), we obtain: $$ d_{1}=\frac{1}{2}B_{1}c_{1}\qquad\text{and}\qquad d_{2}=\frac{1}{2} B_{1}\left(c_{2}-\frac{c_{1}^{2}}{2}\right)+\frac{1}{4}B_{2}c_{1}^{2}. $$ On the other hand, from (11), according to (5), it follows that $$ d_{1}=\frac{(1-\gamma)a_{2}\psi_{2}}{\zeta}\qquad\text{and}\qquad d_{2}= \frac{2(1-\gamma)a_{3}\psi_{3}}{\zeta}-\frac{(1-\gamma)(1+\gamma)a_{2}^{2} \psi_{2}^{2}}{\zeta}, $$ and combining (13) with (14), we have: $$ a_{2}=\frac{\zeta B_{1}c_{1}}{2(1-\gamma)\psi_{2}}, $$ $$a_{3}=\frac{\zeta B_{1}}{4(1-\gamma)\psi_{3}}\left[c_{2}-\frac{c_{1}^{2}}{2} +\frac{1}{2}\frac{B_{2}}{B_{1}}c_{1}^{2}+\frac{\zeta B_{1}(1+\gamma) c_{1}^{2}}{2(1-\gamma)}\right]. $$ $$ a_{3}-\mu a_{2}^{2}=\frac{\zeta B_{1}}{4(1-\gamma)\psi_{3}} \left(c_{2}-\alpha c_{1}^{2}\right), $$ $$ \alpha=\frac{1}{2}\left[1-\frac{B_{2}}{B_{1}}-\frac{\zeta B_{1}(1+\gamma)}{ 1-\gamma}+\frac{2\mu\zeta B_{1}\psi_{3}} {(1-\gamma)\psi_{2}^{2}}\right], $$ and from Lemma 1, our result follows immediately. □ Putting q→1− in Theorem 1, we obtain the next corollary: Corollary 1 If the function f given by (1) belongs to the class \(\mathcal {G}_{\nu }^{\lambda,\gamma }(\zeta ;\Psi)\), with Ψ(z)=1+B1z+B2z2+… satisfying the conditions of Definition 1, and μ is a complex number, then $$\begin{array}{*{20}l} \left|a_{3}-\mu a_{2}^{2}\right|\leq \\ \frac{8|\zeta|B_{1}(\lambda+1)_{2}(\nu+1)_{2}}{3(1-\gamma)}\cdot \max\left\{1;\left|\frac{B_{2}}{B_{1}}+\frac{\zeta B_{1}(1+\gamma)}{1-\gamma} -\frac{3\mu\zeta B_{1}(\lambda+1)(\nu+1)}{2(1-\gamma) (\lambda+2)(\nu+2)} \right|\right\}. 
\end{array} $$ Using a similar proof like for Theorem 1 combined with Lemma 2, we can obtain the following theorem: If the function f given by (1) belongs to the class \( \mathcal {M}_{\nu,q}^{\lambda,\gamma }(\zeta ;\Psi)\), with Ψ(z)=1+B1z+B2z2+… satisfying the conditions of Definition 1 and \(\mu,B_{2}\in \mathbb {R}\), and ζ>0, then $$\left|a_{3}-\mu a_{2}^{2}\right|\leq\left\{ \begin{array}{lll} \frac{\zeta B_{1}}{2(1-\gamma)\psi_{3}}\left[\frac{B_{2}}{B_{1}}+\frac{ \zeta B_{1}(1+\gamma)}{1-\gamma}- \frac{2\mu\zeta B_{1}\psi_{3}}{ (1-\gamma)\psi_{2}^{2}}\right], & \text{if} & \mu\leq\sigma_{1}, \\[1em] \frac{\zeta B_{1}}{2(1-\gamma)\psi_{3}}, & \text{if} & \sigma_{1}\leq\mu \leq\sigma_{2}, \\[1em] \frac{-\zeta B_{1}}{2(1-\gamma)\psi_{3}}\left[\frac{B_{2}}{B_{1}} +\frac{ \zeta B_{1}(1+\gamma)}{1-\gamma}-\frac{2\mu\zeta B_{1}\psi_{3}}{ (1-\gamma)\psi_{2}^{2}}\right], & \text{if} & \mu\geq\sigma_{2}, \end{array} \right. $$ $$ \sigma_{1}=\frac{(1-\gamma)\psi_{2}^{2}}{2\zeta B_{1}\psi_{3}}\left[-1+ \frac{B_{2}}{B_{1}} +\frac{\zeta B_{1}(1+\gamma)}{1-\gamma}\right], $$ $$ \sigma_{2}=\frac{(1-\gamma)\psi_{2}^{2}}{2\zeta B_{1}\psi_{3}}\left[1+\frac{ B_{2}}{B_{1}} +\frac{\zeta B_{1}(1+\gamma)}{1-\gamma}\right], $$ With the same proof like those of Theorem 1, we obtain the equalities (16) and (17) hold. (i) According to the first part of Lemma 2, we have: $$\left|c_{2}-\alpha c_{1}^{2}\right|\leq-4\alpha+2,\;\text{if}\;\alpha\leq0. $$ Using (17), simple computation shows that the inequality α≤0 is equivalent to μ≤σ1, and from (16) combined with the inequality \(\left |c_{2}-\alpha c_{1}^{2}\right |\leq -4\alpha +2\), the first of our theorem is proved. (ii) The second part of Lemma 2 shows that: $$\left|c_{2}-\alpha c_{1}^{2}\right|\leq2,\;\text{if}\;0\leq\alpha\leq1. $$ From (17), it is easy to check that the inequality 0≤α≤1 is equivalent to σ1≤μ≤σ2. From the relation (16), the inequality \(\left |c_{2}-\alpha c_{1}^{2}\right |\leq 2\) proves the second part of our result. (iii) Finally, form the third part of Lemma 2, we have: $$\left|c_{2}-\alpha c_{1}^{2}\right|\leq4\alpha-2,\;\text{if}\;\alpha\geq1. $$ The relation (17) shows immediately that α≥1 is equivalent to μ≥σ2, while (16) combined with the inequality \(\left |c_{2}-\alpha c_{1}^{2}\right |\leq 4\alpha -2\) proves the last part of our result. □ Taking q→1− in Theorem 2, we get the next special case: If the function f given by (1) belongs to the class \(\mathcal {G}_{\nu }^{\lambda,\gamma }(\zeta ;\Psi)\), with Ψ(z)=1+B1z+B2z2+… satisfying the conditions of Definition 1 and \(\mu,B_{2}\in \mathbb {R}\), and ζ>0, then $$\left|a_{3}-\mu a_{2}^{2}\right|\leq\left\{ \begin{array}{l} \frac{8\zeta B_{1}(\lambda+1)_{2}(\nu+1)_{2}} {3(1-\gamma)}\left[\frac{ B_{2}}{B_{1}}+\frac{\zeta B_{1}(1+\gamma)}{1-\gamma} -\frac{3\mu\zeta B_{1}(\lambda+1)(\nu+1)}{2(1-\gamma)(\lambda+2)(\nu+2)}\right], \\[1em] \hfill\text{if}\quad\mu\leq\eta_{1}, \\[1em] \frac{8\zeta B_{1}(\lambda+1)_{2}(\nu+1)_{2}} {3(1-\gamma)},\hfill\text{if} \quad \eta_{1}\leq\mu\leq\eta_{2}, \\[1em] \frac{-8\zeta B_{1}(\lambda+1)_{2}(\nu+1)_{2}} {3(1-\gamma)}\left[\frac{ B_{2}}{B_{1}}+\frac{\zeta B_{1}(1+\gamma)}{1-\gamma} -\frac{3\mu\zeta B_{1}(\lambda+1)(\nu+1)}{2(1-\gamma)(\lambda+2)(\nu+2)}\right], \\[1em] \hfill\text{if}\quad\mu\geq\eta_{2}, \end{array} \right. 
$$ $$ \eta_{1}=\frac{2(1-\gamma)(\lambda+2)(\nu+2)}{3\zeta B_{1}(\lambda+1)(\nu+1)} \left[-1+\frac{B_{2}}{B_{1}}+\frac{\zeta B_{1}(1+\gamma)}{(1-\gamma)} \right], $$ $$ \eta_{2}=\frac{2(1-\gamma)(\lambda+2)(\nu+2)}{3\zeta B_{1}(\lambda+1)(\nu+1)} \left[1+\frac{B_{2}}{B_{1}}+\frac{\zeta B_{1}(1+\gamma)}{(1-\gamma)}\right]. $$ With a similar proof like for Theorem 1 and using the inequalities (7) and (8), we obtained the next result. If the function f given by (1) belongs to the class \( \mathcal {M}_{\nu,q}^{\lambda,\gamma }(\zeta ;\Psi)\), with Ψ(z)=1+B1z+B2z2+… satisfying the conditions of Definition 1 and \(\mu,B_{2}\in \mathbb {R}\), and ζ>0, then the next inequalities hold: (i) for σ1<μ≤σ3, we have $$ \left|a_{3}-\mu a_{2}^{2}\right|+\frac{(1-\gamma)\psi_{2}^{2}} {2\zeta B_{1}\psi_{3}}\left[1-\frac{B_{2}}{B_{1}}-\frac{\zeta B_{1}(1+\gamma)}{ 1-\gamma}+\frac{2\mu\zeta B_{1}\psi_{3}} {(1-\gamma)\psi_{2}^{2}}\right] \left|a_{2}\right|^{2}\leq\frac{\zeta B_{1}}{2(1-\gamma)\psi_{3}}; $$ (ii) for σ3≤μ≤σ2, we have $$ \left|a_{3}\!-\mu a_{2}^{2}\right|+\frac{(1-\gamma)\psi_{2}^{2}} {2\zeta B_{1}\psi_{3}}\left[1+\frac{B_{2}}{B_{1}}+\frac{\zeta B_{1}(1+\gamma)}{ 1-\gamma}-\frac{2\mu\zeta B_{1}\psi_{3}} {(1-\gamma)\psi_{2}^{2}}\right] \left|a_{2}\right|^{2}\leq\frac{\zeta B_{1}}{2(1-\gamma)\psi_{3}}, $$ where σ1 and σ2 are defined by (18) and (19), respectively, $$\sigma_{3}=\frac{(1-\gamma)\psi_{2}^{2}}{2\zeta B_{1}\psi_{3}} \left[\frac{ B_{2}}{B_{1}}+\frac{\zeta B_{1}(1+\gamma)}{(1-\gamma)}\right], $$ and ψk,k∈{2,3}, are given by (6). With the same computations like in the proof of Theorem 1, we obtain the relations (16) and (17), while (15) is equivalent to: $$ c_{1}=\frac{2(1-\gamma)\psi_{2}}{\zeta B_{1}}. $$ (i) To prove the first part of our theorem, we will use the inequality (7). Thus, according to (16), (17), and the above relation, it is easy to check that (7) could be written in the equivalent form (22), while the assumption \(0<\alpha \leq \frac {1 }{2}\) is equivalent to σ1<μ≤σ3. (ii) For the proof of the second part of our result, we will use the inequality (8). From (16), (17), and (24), it follows that (8) could be written in the form (23), and the assumption \(\frac {1}{2}\leq \alpha <1\) is equivalent to σ3<μ≤σ2. 
□ Putting q→1− in Theorem 3, we obtain the following result: If the function f given by (1) belongs to the class \(\mathcal {G}_{\nu }^{\lambda,\gamma }(\zeta ;\Psi)\), with Ψ(z)=1+B1z+B2z2+… satisfying the conditions of Definition 1 and \(\mu,B_{2}\in \mathbb {R}\), and ζ>0, then the next inequalities hold: (i) for η1<μ≤η3, we have $$\begin{array}{*{20}l} \left|a_{3}-\mu a_{2}^{2}\right| \\ +\frac{2(1-\gamma)(\lambda+2)(\nu+2)} {3\zeta B_{1}(\lambda+1)(\nu+1)}\left[ 1-\frac{B_{2}}{B_{1}} -\frac{\zeta B_{1}(1+\gamma)}{1-\gamma}+\frac{ 3\mu\zeta B_{1}(\lambda+1)(\nu+1)}{2(1-\gamma) (\lambda+2)(\nu+2)}\right] \left|a_{2}\right|^{2} \\ \leq\frac{8\zeta B_{1}(\lambda+1)_{2}(\nu+1)_{2}}{3(1-\gamma)}; \end{array} $$ (ii) for η3≤μ≤η2, we have $$\begin{array}{*{20}l} \left|a_{3}-\mu a_{2}^{2}\right| \\ +\frac{2(1-\gamma)(\lambda+2)(\nu+2)}{3\zeta B_{1} (\lambda+1)(\nu+1)}\left[ 1+\frac{B_{2}}{B_{1}} +\frac{\zeta B_{1}(1+\gamma)}{1-\gamma}-\frac{ 3\mu\zeta B_{1}(\lambda+1)(\nu+1)}{2(1-\gamma) (\lambda+2)(\nu+2)}\right] \left|a_{2}\right|^{2} \\ \leq\frac{8\zeta B_{1}(\lambda+1)_{2}(\nu+1)_{2}}{3(1-\gamma)}, \end{array} $$ where η1 and η2 are defined by (20) and (21), respectively, and $$\eta_{3}=\frac{2(1-\gamma)(\lambda+2)(\nu+2)} {3\zeta B_{1}(\lambda+1)(\nu+1) }\left[\frac{B_{2}}{B_{1}} +\frac{\zeta B_{1}(1+\gamma)}{(1-\gamma)}\right]. $$ Applications to functions defined by poisson distribution In [10], Porwal studied a power series whose coefficients are probabilities of the Poisson distribution, that is: $$\mathrm{I}_{m}(z)=z+\sum\limits_{k=2}^{\infty}\frac{m^{k-1}}{(k-1)!} e^{-m}z^{k},\;z\in\mathbb{D},\quad(m>0), $$ and motivated by this investigation Srivastava and Porwal [11] introduced the linear operator \(\mathcal {I}^{m}:\mathcal {A}\rightarrow \mathcal {A}\) defined by: $$\mathcal{I}^{m}f(z):=\mathrm{I}_{m}(z)\times f(z)=z+\sum\limits_{k=2}^{\infty} \frac{m^{k-1}}{(k-1)!}e^{-m}a_{k}z^{k},\;z\in\mathbb{D}, $$ where \(f\in \mathcal {A}\) has the form (1). Let the function Ψ satisfying the conditions of Definition 1. For \(\zeta \in \mathbb {C}^{\ast }, 0\leq \gamma <1\), and \(k\in \mathcal {A}\), the function \(f\in \mathcal {A}\) is said to be in the class \(\mathcal {M}_{\nu,q}^{\lambda,\gamma }\left (\zeta ;k; \Psi \right)\) if \(f\times k\in \mathcal {M}_{\nu,q}^{\lambda,\gamma }(\zeta ;\Psi)\), that is" $$1+\frac{1}{\zeta}\left[\frac{z\left(\mathcal{N}_{\nu,q}^{\lambda}(f\times k)(z)\right)^{\prime}} {(1-\gamma)\mathcal{N}_{\nu,q}^{\lambda}(f\times k)(z) +\gamma z\left(\mathcal{N}_{\nu,q}^{\lambda}(f\times k)(z)\right)^{\prime}}-1 \right] $$ is analytic in \(\mathbb {D}\) and satisfies $$\begin{array}{*{20}l} 1+\frac{1}{\zeta}\left[\frac{z\left(\mathcal{N}_{\nu,q}^{\lambda}(f\times k)(z)\right)^{\prime}} {(1-\gamma)\mathcal{N}_{\nu,q}^{\lambda}(f\times k)(z) +\gamma z\left(\mathcal{N}_{\nu,q}^{\lambda}(f\times k)(z)\right)^{\prime}}-1 \right]\prec\Psi(z) \\ \left(\nu>0,\;\lambda>-1,\;0< q<1,\;\zeta\in\mathbb{C}^{\ast},\;0\leq\gamma<1 \right). \end{array} $$ A special case of the class \(\mathcal {M}_{\nu,q}^{\lambda,\gamma }\left (\zeta ;k;\Psi \right)\) is obtained for k=Im; hence, \(f\in \mathcal { M}_{\nu,q}^{\lambda,\gamma }\left (\zeta ;\mathrm {I}_{m};\Psi \right)\) if and only if \(\mathcal {I}^{m}f\in \mathcal {M}_{\nu,q}^{\lambda,\gamma }(\zeta ;\Psi)\). 
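To make the special case k = I_m concrete, the short sketch below (with an assumed parameter m and assumed sample coefficients a_k) applies the coefficient action of the operator I^m: each a_k is multiplied by the Poisson weight b_k = m^{k−1}e^{−m}/(k−1)!, which gives in particular the values b_2 = me^{−m} and b_3 = m²e^{−m}/2 used in the results that follow.

# Illustrative sketch of the coefficient action of I^m (Hadamard product with I_m).
# The parameter m and the sample coefficients a_k below are assumptions.
import math

def poisson_weight(k, m):
    """b_k = m^(k-1) e^(-m) / (k-1)!  (the k-th Taylor coefficient of I_m, k >= 2)."""
    return m ** (k - 1) * math.exp(-m) / math.factorial(k - 1)

def apply_Im(a_coeffs, m):
    """a_coeffs: {k: a_k} for k >= 2 of f(z) = z + sum a_k z^k.
    Returns the corresponding coefficients of I^m f (its z-coefficient stays 1)."""
    return {k: poisson_weight(k, m) * a_k for k, a_k in a_coeffs.items()}

if __name__ == "__main__":
    m = 2.0                     # assumed Poisson parameter
    a = {2: 0.5, 3: -0.25}      # assumed sample coefficients of f
    print("b_2 = m e^{-m}     =", poisson_weight(2, m))
    print("b_3 = m^2 e^{-m}/2 =", poisson_weight(3, m))
    print("coefficients of I^m f:", apply_Im(a, m))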
Applying Theorems 1 and 2 for the function f×k given by (3), we get the following results, respectively: If the function f given by (1) belongs to the class \(\mathcal {M}_{\nu,q}^{\lambda,\gamma }\left (\zeta ;k;\Psi \right)\), with \( \Psi (z)=1+B_{1}z+B_{2}z^{2}+\dots, k\in \mathcal {A}\) is given by (2) with b2b3≠0, and μ is a complex number, then $$\left|a_{3}-\mu a_{2}^{2}\right|\leq\frac{|\zeta|B_{1}}{2(1-\gamma)|b_{3}| \psi_{3}}\cdot \max\left\{1,\left|\frac{B_{2}}{B_{1}}+\frac{\zeta B_{1}(1+\gamma)} {1-\gamma}-\frac{2\mu\zeta B_{1}b_{3}\psi_{3}}{(1-\gamma) b_{2}^{2}\psi_{2}^{2}}\right|\right\}, $$ where ψk and k∈{2,3} are given by (6). If the function f given by (1) belongs to the class \(\mathcal {M}_{\nu,q}^{\lambda,\gamma }\left (\zeta ;k;\Psi \right)\), with Ψ(z)=1+B1z+B2z2+… satisfying the conditions of Definition 1 and \(\mu,B_{2}\in \mathbb {R}, k\in \mathcal {A}\) is given by (2) with b2b3≠0, and ζ>0, then $$\left|a_{3}-\mu a_{2}^{2}\right|\leq\left\{ \begin{array}{lll} \frac{\zeta B_{1}}{2(1-\gamma)|b_{3}|\psi_{3}}\left[\frac{B_{2}}{B_{1}} + \frac{\zeta B_{1}(1+\gamma)}{(1-\gamma)} -\frac{2\mu\zeta B_{1}b_{3}\psi_{3}}{(1-\gamma)b_{2}^{2}\psi_{2}^{2}}\right], & \text{if} &\mu\leq\sigma_{1}, \\[1em] \frac{\zeta B_{1}}{2(1-\gamma)|b_{3}|\psi_{3}}, & \text{if} &\sigma_{1}\leq\mu\leq\sigma_{2}, \\[1em] \frac{-\zeta B_{1}}{2(1-\gamma)|b_{3}|\psi_{3}}\left[\frac{B_{2}}{B_{1}} + \frac{\zeta B_{1}(1+\gamma)}{(1-\gamma)} -\frac{2\mu\zeta B_{1}b_{3}\psi_{3}}{(1-\gamma)b_{2}^{2}\psi_{2}^{2}}\right], & \text{if} &\mu\geq\sigma_{2}, \end{array} \right. $$ $$\sigma_{1}=\frac{(1-\gamma) b_{2}^{2}\psi_{2}^{2}}{2\zeta B_{1}b_{3}\psi_{3}} \left[-1+\frac{B_{2}}{B_{1}}+\frac{\zeta B_{1}(1+\gamma)}{1-\gamma}\right], $$ $$\sigma_{2}=\frac{(1-\gamma) b_{2}^{2}\psi_{2}^{2}}{2\zeta B_{1}b_{3}\psi_{3}} \left[1+\frac{B_{2}}{B_{1}}+\frac{\zeta B_{1}(1+\gamma)}{1-\gamma}\right], $$ For k:=Im, we have $$b_{2}=me^{-m}\quad\text{and}\quad b_{3}=\frac{m^{2}}{2}e^{-m}, $$ and for this special case from Theorems 4 and 5, we deduce to the following results, respectively: If the function f given by (1) belongs to the class \(\mathcal {M}_{\nu,q}^{\lambda,\gamma }\left (\zeta ;\mathrm {I}_{m};\Psi \right)\), with Ψ(z)=1+B1z+B2z2+…, and μ is a complex number, then $$\left|a_{3}-\mu a_{2}^{2}\right|\leq\frac{|\zeta|B_{1}}{(1- \gamma)m^{2}e^{-m}\psi_{3}}\cdot \max\left\{1;\left|\frac{B_{2}}{B_{1}}+ \frac{\zeta B_{1}(1+\gamma)}{1-\gamma} -\frac{\mu\zeta B_{1}\psi_{3}}{ (1-\gamma)e^{-m}\psi_{2}^{2}}\right|\right\}, $$ If the function f given by (1) belongs to the class \(\mathcal {M}_{\nu,q}^{\lambda,\gamma }\left (\zeta ;\mathrm {I}_{m};\Psi \right)\), with Ψ(z)=1+B1z+B2z2+… satisfying the conditions of Definition 1 and \(\mu,B_{2}\in \mathbb {R}\), and ζ>0, then $$\left|a_{3}-\mu a_{2}^{2}\right|\leq\left\{ \begin{array}{l} \frac{\zeta B_{1}}{(1-\gamma)m^{2}e^{-m}\psi_{3}} \left[\frac{B_{2}}{B_{1}} +\frac{\zeta B_{1}(1+\gamma)} {(1-\gamma)}-\frac{\mu\zeta B_{1}\psi_{3}}{ (1-\gamma) e^{-m}\psi_{2}^{2}}\right],\;\text{if}\quad\mu\leq\sigma_{1}^{ \ast}, \\[1em] \frac{\zeta B_{1}}{(1-\gamma)m^{2}e^{-m}\psi_{3}},\hfill\text{if} \quad\sigma_{1}^{\ast}\leq\mu\leq\sigma_{2}^{\ast}, \\[1em] \frac{-\zeta B_{1}}{(1-\gamma)m^{2}e^{-m}\psi_{3}}\left[\frac{B_{2}}{B_{1}} +\frac{\zeta B_{1}(1+\gamma)} {(1-\gamma)}-\frac{\mu\zeta B_{1}\psi_{3}}{ (1-\gamma)e^{-m}\psi_{2}^{2}}\right],\;\text{if}\quad\mu\geq\sigma_{2}^{ \ast}, \end{array} \right. 
$$ $$\sigma_{1}^{\ast}=\frac{(1-\gamma)e^{-m}\psi_{2}^{2}}{\zeta B_{1}\psi_{3}} \left[-1+\frac{B_{2}}{B_{1}}+\frac{\zeta B_{1}(1+\gamma)}{1-\gamma}\right], $$ $$\sigma_{2}^{\ast}=\frac{(1-\gamma)e^{-m}\psi_{2}^{2}}{\zeta B_{1}\psi_{3}} \left[1+\frac{B_{2}}{B_{1}}+\frac{\zeta B_{1}(1+\gamma)}{1-\gamma}\right], $$ Bulboacă, T.: Differential subordinations and superordinations. Recent Results, House of Scientific Book Publ., Cluj-Napoca (2005). Miller, S. S., Mocanu, P. T.: Differential subordinations: theory and applications, Series on Monographs and Textbooks in Pure and Applied Mathematics, Vol. 225. Marcel Dekker Inc., New York and Basel (2000). Szász, R., Kupán, P. A.: About the univalence of the Bessel functions, Studia Univ. Babeş-Bolyai Math. 54(1), 127–132 (2009). Baricz, Á.: Geometric propertis of generalized Bessel functions. Publ. Math. Debr. 73, 155–178 (2008). Jackson, F. H.: The application of basic numbers to Bessel's and Legendre's functions. Proc. Lond. Math. Soc. 3(2), 1–23 (1905). Selvakumaran, K. A., Szász, R.: Certain geometric properties of an integral operator involving Bessel functions. Kyungpook Math. J. 58, 507–517 (2018). Carathéodory, C.: Über den Variabilitätsbereich der Koeffizienten von Potenzreihen, die gegebene Werte nicht annehmen. Math. Ann. 64, 95–115 (1907). Libera, R. J., Zlotkiewicz E.J.: Coefficient bounds for the inverse of a function with derivative in \(\mathcal {P}\). Proc. Amer. Math. Soc. 87(2), 251–257 (1983). Ma, W., Minda, D.: A unified treatment of some special classes of univalent functions. In: Li, Z., Ren, F., Lang, L., Zhang, S. (eds.)Proceedings of the Conference on Complex Analysis, Tianjin, 1992, pp. 157–169. Conf. Proc. Lecture Notes Anal. I, Int. Press, Cambridge (1994). Porwal, S.: An application of a Poisson distribution series on certain analytic functions. J. Complex Anal., 1–3 (2014). Art. ID 984135. https://doi.org/10.1155/2014/984135. Srivastava, D., Porwal, S.: Some sufficient conditions for Poisson distribution series associated with conic regions. Int. J. Adv. Technol. Eng. Sci. 3(1), 229–235 (2015). The authors are grateful to the reviewer of this article, that gave valuable remarks, comments, and advices, in order to revise and improve the results of the paper. Faculty of Science, Damietta University, New Damietta, Egypt. Department of Mathematics, Faculty of Science, Damietta University, New Damietta, 34517, Egypt Sheza M. El-Deeb Department of Mathematics, Faculty of Science and Arts in Badaya, Al-Qassim University, Al- Badaya, Al-Qassim, Kingdom of Saudi Arabia Faculty of Mathematics and Computer Science, Babeş-Bolyai University, Cluj-Napoca, 400084, Romania Teodor Bulboacă Correspondence to Sheza M. El-Deeb. M. El-Deeb, S., Bulboacă, T. Fekete-Szegő inequalities for certain class of analytic functions connected with q-analogue of Bessel function. J Egypt Math Soc 27, 42 (2019). https://doi.org/10.1186/s42787-019-0049-2 Fekete-Szegő inequality Differential subordination Bessel function of first kind q-derivative
Dynamics of a stochastic eutrophication-chemostat model with impulsive dredging and pulse inputting on environmental toxicant Jianjun Jiao ORCID: orcid.org/0000-0002-4208-95981 & Qiuhua Li2 In this paper, we present a stochastic eutrophication-chemostat model with impulsive dredging and pulse inputting on environmental toxicant. The sufficient condition for the extinction of microorganisms is obtained. The sufficient condition for the investigated system with unique ergodic stationary distribution is also obtained. The results show that the stochastic noise, impulsive dredging, and pulse input on the environmental toxicant play important roles in the extinction of microorganisms. The results also indicate the effective and reliable controlling strategy for water resource management. Finally, numerical simulations are employed to illustrate our results. The chemostat is a device for continuous and impulsive cultures of microorganisms in laboratory [1–3]. Impulsive differential equations are found in almost every domain of applied science and have been studied in many investigations [4, 5]. With the development of society, the increasing amount of toxicants and contaminants have entered ecological systems. Environmental pollution has become one of the most important society-ecological problems. Therefore, it is very important to study the effects of toxicants on a population or community. Specially, the toxicant and abundant microorganisms in the water pollution environment are also a threat to the water resource management. Consequently, it is important to discuss chemostat models in a polluted environment [6, 7]. Zhou et al. [8] considered that reservoir dredging is the main and effective way to improve water quality by using a physical method. However, it is well known that many real-world systems may be disturbed by stochastic factors. Population systems are often subjected to various types of environmental noise. In ecology, it is critical to discover whether the presence of this noise has significant effects on population systems. Mao [9, 10] investigated stochastic differential equations and their applications. Lv et al. [11] presented an impulsive stochastic chemostat model with nonlinear perturbation. Inspired by the above discussion, we consider a stochastic eutrophication-chemostat model with impulsive dredging and pulse inputting on environmental toxicant: $$ \left \{ \textstyle\begin{array}{@{}l} \left . \textstyle\begin{array}{@{}l} dx(t)=[D(x_{0}-x(t)) \\ \hphantom{dx(t)={}}{}-\frac{ \beta x(t)y(t)}{k(A+x(t)+By(t))}]\,dt \\ \hphantom{dx(t)={}}{}+x(t)(\sigma _{11}+\sigma _{12}x(t))\,dB_{1}(t), \\ dy(t)=[\frac{\beta x(t)y(t)}{A+x(t)+By(t)} \\ \hphantom{dy(t)={}}{}-Dy(t)-rc_{o}(t)y(t)]\,dt \\ \hphantom{dy(t)={}}{}+y(t)(\sigma _{21}+\sigma _{22}y(t))\,dB_{2}(t), \\ dc_{o}(t)=(fc_{e}(t)-(g+m)c_{o}(t))\,dt, \\ dc_{e}(t)=(-hc_{e}(t))\,dt, \end{array}\displaystyle \right \}\quad t \neq (n+l)\tau , t\neq (n+1)\tau , \\ \left . \textstyle\begin{array}{@{}l} \triangle x(t)=0, \\ \triangle y(t)=-h_{1}y(t), \\ \triangle c_{o}(t)=0, \\ \triangle c_{e}(t)=-h_{2}c_{e}(t), \end{array}\displaystyle \right \}\quad t= (n+l)\tau ,n\in Z^{+}, \\ \left . \textstyle\begin{array}{@{}l} \triangle x(t)=0, \\ \triangle y(t)=0, \\ \triangle c_{o}(t)=0, \\ \triangle c_{e}(t)=\mu , \end{array}\displaystyle \right \}\quad t= (n+1)\tau ,n\in Z^{+}, \end{array}\displaystyle \right . $$ where \(x(t)\) is the concentration of the nutrient in a lake at time t. 
\(y(t)\) is the concentration of the microorganism in a lake at time t. \(c_{o}(t)\) is the concentration of the toxicant in the organism of the microorganism in a lake at time t. \(c_{e}(t)\) is the concentration of the toxicant in a lake at time t. D denotes the input rate from the lakes containing the nutrient and the wash-out rate of nutrients and microorganisms from the lake. \(\beta >0\) is the uptake constant of the nutrient. \(\frac{x(t)}{A+x(t)+By(t)}\) is a functional response of the Beddington–DeAngelis type. \(k>0\) is the yield of the microorganism y per unit mass of the nutrient. \(A>0\) and \(B>0\) are the saturating parameters of the Beddington–DeAngelis functional response. \(r>0\) is the depletion rate coefficient of the microorganism y due to the microorganism organismal toxicant. \(f>0\) is the coefficient of the population organism's net uptake of toxicant from the environment in a lake. \(-g<0\) and \(-m<0\), respectively, represent coefficients of the elimination and depuration rates of the toxicant in the organism in a lake. \(-h<0\) is the coefficient of the totality of toxicant losses from the system environment in a lake, including processes such as biological transformation, chemical hydrolysis, volatilization, microbial degradation, and photosynthetic degradation. τ is the period of impulsive dredging or the pulse input environmental toxin. \(0< h_{1}<1\) is the effect of impulsive dredging microorganism at time \(t=(n+l)\) (\(0< l<1\)). \(0< h_{2}<1\) is the effect of impulsive dredging environmental toxicant at time \(t=(n+l)\) (\(0< l<1\)). \(\mu \geq 0 \) is the amount of pulse input of environmental toxin concentration in a lake at \(t=(n+1)\tau \), \(n\in Z^{+}\), and \(Z^{+}=\{1,2,\ldots \}\). The lemmas In this paper, \((\varOmega , \mathscr{F},\mathscr{F}_{t\geq 0},P)\) stands for a complete probability space with filtration \(\mathscr{F}_{t\geq 0}\) satisfying the usual conditions. Define \(f^{l}=\inf_{t\in R_{+}}f(t)\), \(f(t)\) is a bounded function on \([0,+\infty )\), \(\langle f(t)\rangle =\frac{1}{t}\int _{0}^{t} f(s) \,ds\), where \(f(t)\) is an integrable function on \([0,+\infty )\). Consider the subsystem of system (2.1) as follows: $$ \left \{ \textstyle\begin{array}{@{}l} \left . \textstyle\begin{array}{@{}l} dc_{o}(t)=(fc_{e}(t)-(g+m)c_{o}(t))\,dt, \\ dc_{e}(t)=(-hc_{e}(t))\,dt, \end{array}\displaystyle \right \}\quad t \neq (n+l)\tau , t\neq (n+1)\tau , \\ \left . \textstyle\begin{array}{@{}l} \triangle c_{o}(t)=0, \\ \triangle c_{e}(t)=-h_{2}c_{e}(t), \end{array}\displaystyle \right \}\quad t= (n+l)\tau ,n\in Z^{+}, \\ \left . \textstyle\begin{array}{@{}l} \triangle c_{o}(t)=0, \\ \triangle c_{e}(t)=\mu , \end{array}\displaystyle \right \}\quad t= (n+1)\tau ,n\in Z^{+}. \end{array}\displaystyle \right . $$ With regard to system (3.1), we have the following equations with integrating and solving the first two equations of system (3.1) between pulses: $$ \left \{ \textstyle\begin{array}{@{}l} c_{o}(t)= \textstyle\begin{cases} c_{o}(n\tau ^{+})e^{-(g+m)(t-n\tau )} + \frac{ fc_{e}(n\tau ^{+})(1-e^{-(h-g-m)(t-n\tau )})}{(h-g-m)}, \\ \quad t\in (n\tau , (n+l)\tau ], \\ c_{o}((n+l)\tau ^{+})e^{-(g+m)(t-(n+l)\tau )} \\ \quad {}+ \frac{ fc_{e}((n+l)\tau ^{+})(1-e^{-(h-g-m)(t-(n+l)\tau )})}{(h-g-m)}, \\ \quad t\in ((n+l)\tau ,(n+1)\tau ], \end{cases}\displaystyle \\ c_{e}(t)= \textstyle\begin{cases} c_{e}(n\tau ^{+})e^{-h(t-n\tau )},& t\in (n\tau ,(n+l) \tau ], \\ c_{e}((n+l)\tau ^{+})e^{-h(t-(n+l)\tau )},&t\in ((n+l) \tau ,(n+1)\tau ]. 
\end{cases}\displaystyle \end{array}\displaystyle \right . $$ The stroboscopic map of system (3.1) is obtained by the last two equations of system (3.1): $$ \left \{ \textstyle\begin{array}{@{}l} c_{o}((n+1)\tau ^{+})=c_{o}(n\tau ^{+})e^{-(g+m)\tau } \\ \hphantom{c_{o}((n+1)\tau ^{+})={}} {}+ \frac{ c_{e}(n\tau ^{+})f(e^{-(g+m)(1-l)\tau }-e^{-(h-g-m)l\tau -(1-l)\tau })}{(h-1)} \\ \hphantom{c_{o}((n+1)\tau ^{+})={}} {}+ \frac{ (1-h_{2})c_{e}(n\tau ^{+})f(e^{-hl\tau }-e^{-(h-g-m)\tau )})}{(h-g-m)}, \\ c_{e}((n+1)\tau ^{+})=(1-h_{2})e^{-h\tau }c_{e}(n\tau ^{+})+ \mu . \end{array}\displaystyle \right . $$ We can easily have a unique fixed point \((c_{o}^{\ast },c_{e}^{\ast })\) of system (3.3) as follows: $$ \left \{ \textstyle\begin{array}{@{}l} c_{o}^{\ast }=\frac{\mu f}{1-e^{-(g+m)\tau }}\times [ \frac{ (e^{-(g+m)(1-l)\tau }-e^{-(h-g-m)l\tau -(1-l)\tau })}{(h-g-m)(1-(1-h_{2})e^{-h\tau })} \\ \hphantom{c_{o}^{\ast }={}}{}+ \frac{ (1-h_{2})(e^{-h_{l}\tau }-e^{-(h-g-m)\tau )})}{(h-g-m)(1-(1-h_{2})e^{-h\tau })}], \\ c_{e}^{\ast }=\frac{\mu }{1-(1-h_{2})e^{-h\tau }}. \end{array}\displaystyle \right . $$ The unique fixed point \((c_{o}^{\ast },c_{e}^{\ast })\) of system (3.3) is globally asymptotically stable for the eigenvalues of the coefficient matrix of system (3.3) $$ \begin{pmatrix} e^{-\tau } & \star \\ 0& (1-h_{2})e^{-h\tau } \end{pmatrix} $$ are less 1, there is no need for calculating ⋆. Similar to Lemma 3.3 in reference [6], we can obtain the following lemma. Lemma 3.1 System (3.1) has a unique positive τ-periodic solution \((\widetilde{c_{o}(t)},\widetilde{c_{e}(t)})\), which is also globally asymptotically stable, \(\widetilde{c_{o}(t)}\)and \(\widetilde{c_{e}(t)}\)are defined as follows: $$ \left \{ \textstyle\begin{array}{@{}l} \widetilde{c_{o}(t)}= \textstyle\begin{cases} c_{o}^{\ast }e^{-(g+m)(t-n\tau )} + \frac{ fc_{e}^{\ast }(1-e^{-(h-g-m)(t-n\tau )})}{(h-g-m)}, \\ \quad t\in (n\tau , (n+l)\tau ], \\ c_{o}^{\ast \ast }e^{-(g+m)(t-(n+l)\tau )}+ \frac{ fc_{e}^{\ast \ast }(1-e^{-(h-g-m)(t-(n+l)\tau )})}{(h-g-m)}, \\ \quad t\in ((n+l)\tau ,(n+1)\tau ], \end{cases}\displaystyle \\ \widetilde{c_{e}(t)}= \textstyle\begin{cases} c_{e}^{\ast }e^{-h(t-n\tau )},& t\in (n\tau ,(n+l)\tau ], \\ c_{e}^{\ast \ast }e^{-h(t-(n+l)\tau )},&t\in ((n+l)\tau ,(n+1) \tau ], \end{cases}\displaystyle \end{array}\displaystyle \right . $$ where \(c_{o}^{\ast }\), \(c_{e}^{\ast }\)are defined as (3.4), and $$ \left \{ \textstyle\begin{array}{@{}l} c_{o}^{\ast \ast }=c_{o}^{\ast }e^{-(g+m)l\tau }+ \frac{ fc_{e}^{\ast }(1-e^{-(h-g-m)l\tau })}{(h-g-m)}, \\ c_{e}^{\ast \ast }=(1-h_{2})e^{-hl\tau }c^{\ast }_{e}. \end{array}\displaystyle \right . $$ Remark 3.2 For any positive solution \((c_{o}(t),c_{e}(t))\) of system (3.1) with the initial value \((c_{o}(0),c_{e}(0))\in R^{+}_{2}\), we can obtain $$ \begin{aligned}[b] &\lim_{t\rightarrow +\infty }\bigl\langle c_{o}(t)\bigr\rangle \\ &\quad =\frac{c^{\ast }_{o}(1-e^{-(g+m)l\tau })}{(g+m)\tau } \\ &\qquad {} +\frac{flc^{\ast }_{e}}{h-g-m}- \frac{fc^{\ast }_{e}(1-e^{-(h-g-m)l\tau })}{(h-g-m)^{2}\tau } \\ &\qquad {}+ \frac{c^{\ast \ast }_{o}(e^{-(g+m)l\tau }-e^{-(g+m)\tau })}{\tau } \\ &\qquad {} +\frac{f(1-l)c^{\ast \ast }_{e}}{h-g-m}- \frac{fc^{\ast \ast }_{e}(e^{-(h-g-m)l\tau }-e^{-(h-g-m)\tau })}{(h-g-m)^{2}\tau } \stackrel{\Delta }{=} \widetilde{c_{0}}, \end{aligned} $$ where \(c^{\ast }_{o}\) and \(c^{\ast }_{e}\) are defined as (3.4), and \(c^{\ast \ast }_{o}\) and \(c^{\ast \ast }_{e}\) are defined as (3.6). 
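Lemma 3.1 and the fixed point (3.4) lend themselves to a quick numerical illustration: between impulses the toxicant subsystem (3.1) is linear and can be integrated exactly, and the resulting period map can then be iterated from an arbitrary starting point. The Python sketch below is our own addition (the helper names are ours, and the parameter values are borrowed from the simulation section later in the paper); it integrates the subsystem exactly over each sub-interval, applies the two jumps, and shows the iterates settling on a fixed point whose \(c_{e}\)-component agrees with \(c_{e}^{\ast }\) in (3.4).

import math

# parameter values taken from the simulation section; any positive values with g + m != h work
f, g, m, h = 0.1, 0.5, 0.5, 0.1
h2, mu, l, tau = 0.2, 0.1, 0.25, 4.0

def flow(co, ce, dt):
    """Exact solution of dc_o/dt = f*c_e - (g+m)*c_o, dc_e/dt = -h*c_e over a time step dt."""
    ce_new = ce * math.exp(-h * dt)
    co_new = (co * math.exp(-(g + m) * dt)
              + f * ce * (math.exp(-h * dt) - math.exp(-(g + m) * dt)) / ((g + m) - h))
    return co_new, ce_new

def period_map(co, ce):
    """One period of (3.1): flow, dredging impulse at (n+l)*tau, toxicant pulse at (n+1)*tau."""
    co, ce = flow(co, ce, l * tau)          # n*tau -> (n+l)*tau
    ce *= (1.0 - h2)                        # Delta c_e = -h2 * c_e
    co, ce = flow(co, ce, (1.0 - l) * tau)  # (n+l)*tau -> (n+1)*tau
    ce += mu                                # Delta c_e = mu
    return co, ce

co, ce = 0.9, 0.9                           # arbitrary initial data
for _ in range(200):
    co, ce = period_map(co, ce)

print("iterated fixed point:", co, ce)
print("c_e* from (3.4)     :", mu / (1.0 - (1.0 - h2) * math.exp(-h * tau)))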
For convenience, we consider the following notation: $$ \tau _{n}=n\tau ,\qquad \tau _{n+l}=(n+l)\tau ,\qquad h_{n+l}=h_{1}. $$ Define \((x(t),y(t))\) and \((w(t),z(t))\) are the solutions of the subsystem of system (2.1), respectively: $$ \left \{ \textstyle\begin{array}{@{}l} \left . \textstyle\begin{array}{@{}l} dx(t)=[D(x_{0}-x(t)) \\ \hphantom{dx(t)={}}{}-\frac{\beta x(t)y(t)}{k(A+x(t)+By(t))}]\,dt \\ \hphantom{dx(t)={}}{} +x(t)(\sigma _{11}+\sigma _{12}x(t))\,dB_{1}(t), \\ dy(t)=[\frac{\beta x(t)y(t)}{A+x(t)+By(t)} \\ \hphantom{dy(t)={}}{} -Dy(t)-rc_{o}(t)y(t)]\,dt \\ \hphantom{dy(t)={}}{} +y(t)(\sigma _{21}+\sigma _{22}y(t))\,dB_{2}(t), \end{array}\displaystyle \right \} \quad t\neq (n+l)\tau , \\ \left . \textstyle\begin{array}{@{}l} \triangle x(t)=0, \\ \triangle y(t)=-h_{1}y(t), \end{array}\displaystyle \right \}\quad t= (n+l)\tau ,n\in Z^{+}, \end{array}\displaystyle \right . $$ and the following SDE without impulsive perturbations: $$ \left \{ \textstyle\begin{array}{@{}l} dw(t)=[D(x_{0}-w(t)) \\ \hphantom{dw(t)={}}{}- \frac{\beta w(t)\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z(t)}{k(A+w(t)+B\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z(t))}]\,dt \\ \hphantom{dw(t)={}}{} +w(t)(\sigma _{11}+\sigma _{12}w(t))\,dB_{1}(t), \\ dz(t)=[ \frac{ \beta w(t)z(t)}{A+w(t)+B\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z(t)} \\ \hphantom{dz(t)={}}{} -Dz(t)-rc_{o}(t)z(t)]\,dt \\ \hphantom{dz(t)={}}{} +z(t)[\sigma _{21}+\sigma _{22}\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z(t)]\,dB_{2}(t), \end{array}\displaystyle \right . $$ with the initial value \(w(0)=x(0)\) and \(z(0)=y(0)\). The solutions \((x(t),y(t))\)of the subsystem of system (2.1) can also be expressed as follows: $$ \left \{ \textstyle\begin{array}{@{}l} x(t)=w(t), \\ y(t)=\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z(t), \end{array}\displaystyle \right . $$ where \((w(t),z(t))\)is the solution of (3.10). One can find that \((x(t),y(t))\) is continuous on the interval \((\tau _{n},\tau _{n+l})\), and for \(t\neq \tau _{n+l}\), $$ \left \{ \textstyle\begin{array}{@{}l} dx(t)=dw(t) \\ \hphantom{dx(t)}=[D(x_{0}-w(t)) - \frac{ \beta w(t)\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z(t)}{k(A+w(t)+B\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z(t))}]\,dt \\ \hphantom{dx(t)={}}{} +w(t)(\sigma _{11}+\sigma _{12}w(t))\,dB_{1}(t) \\ \hphantom{dx(t)} =[D(x_{0}-w(t)) -\frac{\beta w(t)y(t)}{k(A+w(t)+By(t))}]\,dt \\ \hphantom{dx(t)={}}{} +w(t)(\sigma _{11}+\sigma _{12}w(t))\,dB_{1}(t), \\ dy(t)=\prod_{0< \tau _{n+l}< t}(1-h_{n+l})dz(t) \\ \hphantom{dy(t)} =\prod_{0< \tau _{n+l}< t}(1-h_{n+l})\{ [ \frac{\beta w(t)z(t)}{A+w(t)+B\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z(t)} \\ \hphantom{dy(t)={}}{} -Dz(t)-rc_{o}(t)z(t)]\,dt \\ \hphantom{dy(t)={}}{} +z(t)[\sigma _{21}+\sigma _{22}\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z(t)]\,dB_{2}(t) \} \\ \hphantom{dy(t)} =[ \frac{\beta w(t)\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z(t)}{A+w(t)+B\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z(t)} \\ \hphantom{dy(t)={}}{} -D\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z(t)-rc_{o}(t)\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z(t)]\,dt \\ \hphantom{dy(t)={}}{} +\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z(t)[\sigma _{21}+\sigma _{22} \prod_{0< \tau _{n+l}< t}(1-h_{n+l})z(t)]\,dB_{2}(t) \\ \hphantom{dy(t)} =[\frac{\beta w(t)y(t)}{A+w(t)+By(t)} -Dy(t)-rc_{o}(t)y(t)]\,dt \\ \hphantom{dy(t)={}}{} +y(t)[\sigma _{21}+\sigma _{22}y(t)]\,dB_{2}(t). \end{array}\displaystyle \right . 
$$ For every \(n\in N\), and \(\tau _{n+l}\in [0,+\infty )\), $$ \begin{aligned}[b] y\bigl(\tau ^{+}_{n+l} \bigr)&=\lim_{t\rightarrow \tau ^{+}_{n+l}}\prod_{0< \tau _{j}< t}(1-h_{j)}z(t) \\ &=\prod_{0< \tau _{j}\leq \tau _{n+l}}(1-h_{\tau _{j})}z\bigl(\tau ^{+}_{n+l}\bigr) \\ &=(1-h_{\tau _{n+l})}\prod_{0< \tau _{j}< \tau _{n+l}}(1-h_{\tau _{j})}z( \tau _{n+l}) \\ &=(1-h_{\tau _{n+l})}y(\tau _{n+l}), \end{aligned} $$ $$ \begin{aligned}[b] y\bigl(\tau ^{-}_{n+l} \bigr)&=\lim_{t\rightarrow \tau ^{-}_{n+l}}\prod_{0< \tau _{j}< t}(1-h_{j)}z(t) \\ &=\prod_{0< \tau _{j}< \tau _{n+l}}(1-h_{\tau _{j})}z\bigl(\tau ^{-}_{n+l}\bigr) \\ &=\prod_{0< \tau _{j}< \tau _{n+l}}(1-h_{\tau _{j})}z(\tau _{n+l})=y( \tau _{n+l}). \end{aligned} $$ Assumption 3.4 There exists a bounded domain \(U\subset E_{d}\) with regular boundary, then (\(A_{1}\)): In the open domain U and some neighborhood thereof, the smallest eigenvalue of the diffusion matrix \(A(x)\) is bounded away from zero; If \(x\in E_{d}\setminus U\), the mean time τ at which a path issuing from x reaches the set U is finite, and \(\sup_{x\in K}E_{x}\tau <\infty \) for every compact subset \(K\subset U_{d}\). Assumption 3.4 is a general assumption which is the condition for Lemma 3.6. If Assumption 3.4holds, the Markov process \(X(t)\)has a stationary distribution \(\mu (\cdot )\), and $$ \mathbb{P} \biggl\{ \lim_{T\rightarrow \infty }\frac{1}{T} \int ^{T}_{0}f \bigl(x(t) \bigr)\,dt= \int _{E_{d}}f(x)\mu (dx) \biggr\} =1, $$ where f is an integrable function with respect to the measure μ. The dynamics In the following theorem, we devote ourselves to investigating system (3.10). Theorem 4.1 If \(\frac{\beta }{A}\int ^{\infty }_{0}w\phi (w)\,dw< D+r\widetilde{c_{o}}+ \frac{\sigma ^{2}_{21}}{2}\)holds, then $$ \lim_{t\rightarrow +\infty }z(t)=0\quad \textit{a.s.}, $$ where for \(x\in (0,+\infty )\) $$ \begin{aligned} \phi (x)={}&Cx^{-2- \frac{2(2Dx_{0}\sigma _{12}+D\sigma _{11})}{\sigma ^{3}_{1}}}\times ( \sigma _{11}+\sigma _{12}x)^{-2+ \frac{2(2Dx_{0}\sigma _{12}+D\sigma _{11})}{\sigma ^{3}_{11}}} \\ &{} \times e^{-\frac{2}{\sigma _{11}(\sigma _{11}+\sigma _{12}x)}{( \frac{Dx_{0}}{x}+ \frac{2Dx_{0}\sigma _{12}+D\sigma _{11}}{\sigma _{11}}})}, \end{aligned} $$ and constant C satisfies that \(\int ^{\infty }_{0}\phi (x)\,dx=1\). Constructing the following auxiliary differential equation: $$ dW(t)=\bigl[D\bigl(x_{0}-W(t)\bigr)\bigr]\,dt+W(t) \bigl(\sigma _{11}+\sigma _{12}W(t)\bigr) \,dB_{1}(t), $$ with the initial value \(W(0)=x(0)>0\), we assume that \(W(t)\) is the solution of (4.3). Obviously, the following inequality can be obtained by the comparison theorem for stochastic differential equations: $$ w(t)\leq W(t)\quad \mbox{a.s.} $$ We set $$ a(w)=D\bigl(x_{0}-w(t)\bigr),\qquad \sigma (w)=w( \sigma _{11}+\sigma _{12}w),\quad w \in (0,+\infty ), $$ and compute the following indefinite integral: $$ \begin{aligned}[b] \int \frac{a(t)}{\sigma ^{2}(t)}\,dt={}& \int \frac{D(x_{0}-t)}{t^{2}(\sigma _{11}+\sigma _{12}t)^{2}}\,dt \\ ={}&\frac{2Dx_{0}\sigma _{12}+D\sigma _{11}}{\sigma ^{3}_{11}}\ln \frac{\sigma _{11}+\sigma _{12}t}{t} \\ &{}-\frac{Dx_{0}}{\sigma _{11}t(\sigma _{11}+\sigma _{12}t)}- \frac{2Dx_{0}\sigma _{12}+D\sigma _{11}}{\sigma ^{2}_{11}t(\sigma _{11}+\sigma _{12}t)}+C. 
\end{aligned} $$ $$ e^{\int \frac{a(t)}{\sigma ^{2}(t)}\,dt}=e^{C}\biggl( \frac{\sigma _{11}+\sigma _{12}t}{t}\biggr)^{ \frac{2Dx_{0}\sigma _{12}+D\sigma _{11}}{\sigma ^{3}_{11}}} e^{- \frac{1}{\sigma _{11}(\sigma _{11}+\sigma _{12}t)}(\frac{Dx_{0}}{t}+ \frac{2Dx_{0}\sigma _{12}+D\sigma _{11}}{\sigma _{11}})}. $$ $$ \begin{aligned}[b] \int ^{\infty }_{0}\frac{1}{\sigma ^{2}(w)}e^{\int ^{w}_{0} \frac{2a(s)}{\sigma ^{2}(s)}\,ds} \,dw={}& \int ^{\infty }_{0} w^{-2}(\sigma _{11}+ \sigma _{12}w)^{2}\biggl( \frac{\sigma _{11}+\sigma _{12}w}{w}\biggr)^{ \frac{2(2Dx_{0}\sigma _{12}+D\sigma _{11})}{\sigma ^{3}_{11}}} \\ &{}\times e^{-\frac{2}{\sigma _{11}(\sigma _{11}+\sigma _{12}w)}( \frac{2(2Dx_{0}\sigma _{12}+D\sigma _{11})}{\sigma ^{3}_{11}})}\,dw< \infty . \end{aligned} $$ This indicates that SDE (4.3) has the ergodic property. By the ergodic theorem, we have $$ \lim_{t\rightarrow +\infty }\frac{1}{t} \int ^{t}_{0}w(s)\,ds= \int ^{\infty }_{0}w\phi (w)\,dw \quad \mbox{a.s.} $$ Applying Itô's formula, we have $$ \begin{aligned}[b] d\ln z(t)={}&\biggl[ \frac{\beta w(t)}{A+w(t)+B\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z(t)}-D-rc_{0}(t) \\ &{}-\frac{1}{2}\biggl(\sigma _{21}+\sigma _{22}\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z(t) \biggr)^{2}\biggr]\,dt \\ &{}+\biggl(\sigma _{21}+\sigma _{22}\prod _{0< \tau _{n+l}< t}(1-h_{n+l})z(t)\biggr)\,dB_{2}(t) \\ \leq{}& \biggl[\frac{ \beta w(t)}{A}-D-rc_{o}(t)- \frac{\sigma ^{2}_{21}}{2} \\ &{}-\sigma _{21}\sigma _{22}\prod _{0< \tau _{n+l}< t}(1-h_{n+l})z(t)- \frac{(\prod_{0< \tau _{n+l}< t}(1-h_{n+l})\sigma _{22})^{2}}{2}z^{2}(t) \biggr]\,dt \\ &{} +\biggl(\sigma _{21}+\sigma _{22}\prod _{0< \tau _{n+l}< t}(1-h_{n+l})z(t)\biggr)\,dB_{2}(t). \end{aligned} $$ Integrating with respect to t from 0 to t on both sides of (4.10), we have $$ \begin{aligned}[b] \ln z(t)\leq{}& \frac{\beta }{A} \int ^{t}_{0}w(s)\,ds-\biggl(D+ \frac{\sigma ^{2}_{21}}{2}\biggr)t- \int ^{t}_{0}rc_{o}(s)\,ds \\ &{} - \frac{(\sigma _{22}\prod_{0< \tau _{n+l}< t}(1-h_{n+l}))^{2}}{2} \int ^{t}_{0}z^{2}(s)\,ds+ \sigma _{21}B_{2}(t)+M(t)+\ln y(0), \end{aligned} $$ where \(M(t)=\sigma _{22}\prod_{0<\tau _{n+l}<t}(1-h_{n+l})\int ^{t}_{0}z(s)\,dB_{2}(t)\) and its quadratic variation is given by $$ \langle M,M\rangle (t)=\biggl(\sigma _{22}\prod _{0< \tau _{n+l}< t}(1-h_{n+l})\biggr)^{2} \int ^{t}_{0}z^{2}(s)\,ds. $$ According to the exponential martingales inequality, for any positive τ, α, β, $$ P\biggl\{ \sup_{0\leq t\leq T}\biggl[M(t)- \frac{a}{2\langle M,M\rangle }(t)\biggr]>b\biggr\} \leq e^{-ab}. $$ Let \(T=k_{1}\), \(a=1\), \(b=\ln k_{1}\), then $$ P\biggl\{ \sup_{0\leq t\leq T}\biggl[M(t)- \frac{1}{2\langle M,M\rangle }(t)\biggr]>\ln k_{1}\biggr\} \leq \frac{1}{k_{1}}. $$ There exists random \(k^{0}_{1}\in k_{1}(\omega )\) such that \(k_{1}>k^{0}_{1}\) for almost all \(\omega \in \varOmega \). We can obtain the following by the Borel–Cantelli lemma: $$ \sup_{0\leq t\leq T}\biggl[M(t)- \frac{1}{2\langle M,M\rangle }(t)\biggr]\leq \ln k_{1}. $$ $$ \begin{aligned}[b] M(t)&\leq \ln k_{1}+ \frac{1}{2}\langle M,M\rangle (t) \\ & =\ln k_{1}+ \frac{(\sigma _{22}\prod_{0< \tau _{n+l}< t}(1-h_{n+l}))^{2}}{2} \int ^{t}_{0}z^{2}(s)\,ds, \\ &\quad \mbox{for all } 0< t< k_{1}, k_{1}>k^{0}_{1} \mbox{ a.s.} \end{aligned} $$ Considering (4.11) and (4.16), we have $$ \ln z(t)\leq \frac{\beta }{A} \int ^{t}_{0}w(s)\,ds-\biggl(D+ \frac{\sigma ^{2}_{21}}{2}\biggr)t- \int ^{t}_{0}rc_{o}(s)\,ds+\sigma _{21}B_{2}(t)+ \ln k_{1}+\ln y(0). 
$$ Then, for \(0\leq k_{1}-1\leq t\leq k_{1}\), we have $$ \begin{aligned}[b] \frac{\ln z(t)}{t}\leq{}& \frac{\beta }{At} \int ^{t}_{0}w(s)\,ds-\biggl(D+ \frac{\sigma ^{2}_{21}}{2}\biggr) \\ &{}-\frac{1}{t} \int ^{t}_{0}rc_{o}(s)\,ds+ \frac{\sigma _{21}B_{2}(t)}{t}+ \frac{\ln k_{1}}{k_{1}-1}+\frac{\ln y(0)}{t}. \end{aligned} $$ Taking the superior limit on both sides of (4.18), note that $$ \lim_{t\rightarrow +\infty }B_{2}(t)=0 \quad \mbox{a.s.} $$ and \(t\rightarrow +\infty \Rightarrow k_{1}\rightarrow +\infty \), we have $$ \lim_{k_{1}\rightarrow +\infty }\frac{\ln k_{1}}{k_{1}}=\lim _{k_{1} \rightarrow +\infty }\frac{1}{k_{1}}=0. $$ Then we obtain $$ \lim \sup_{t\rightarrow +\infty }\frac{\ln z(t)}{t}\leq \frac{\beta }{A} \int ^{+\infty }_{0}w\phi (w)\,dw-\biggl(D+r \widetilde{c_{o}}+ \frac{\sigma ^{2}_{21}}{2}\biggr). $$ This implies that if \(\frac{\beta }{A}\int ^{\infty }_{0}w\phi (w)\,dw< D+r\widetilde{c_{o}}+ \frac{\sigma ^{2}_{21}}{2}\) holds, then $$ \lim_{t\rightarrow +\infty }z(t)=0\quad \mbox{a.s.} $$ This completes the proof. □ According to Lemma 3.3 and Theorem 4.1, one can easily obtain $$ \lim_{t\rightarrow +\infty }y(t)=0\quad \mbox{a.s.} $$ If \(\frac{\beta }{A}\int ^{\infty }_{0}w\phi (w)\,dw< D+r\widetilde{c_{o}}+ \frac{\sigma ^{2}_{21}}{2}\)holds, the distribution of \(x(t)\)converges weakly to the measure which has the density \(\pi (x)\). For any small \(\varepsilon >0\), there exist \(t_{0}\) and a set \(\varOmega _{\varepsilon }\subset \varOmega \) such that \(\mathbb{P}(\varOmega _{\varepsilon })>1-\varepsilon \) and \(\frac{ xy}{k(A+x+By)}\leq \varepsilon x\) for \(t\geq t_{0}\) and \(\omega \in \varOmega _{\varepsilon }\). Then $$\begin{aligned}& \bigl[ \bigl(x_{0}-x(t) \bigr)-\varepsilon x(t) \bigr]\,dt+x(t) \bigl( \sigma _{11}+\sigma _{12}x(t) \bigr) \,dB_{1}(t) \\& \quad \leq dx(t)\leq \bigl[ \bigl(x_{0}-x(t) \bigr) \bigr]\,dt+x(t) \bigl( \sigma _{11}+\sigma _{12}x(t) \bigr) \,dB_{1}(t). \end{aligned}$$ This shows that the distribution of the process \(x(t)\) converges weakly to the measure with density \(\pi (x)\). □ If \(Dx_{0}\beta >(D+\sigma _{11}^{2}+ \frac{2\sigma _{12}Dx_{0}}{\sigma _{11}})(D+rc^{u}_{o}+\frac{1}{2} \sigma _{21}^{2})\)holds, system (3.10) admits a unique stationary distribution and it has ergodic property for initial \((w(0),z(0))\in \mathbb{R}^{2}_{+}\). $$\begin{aligned}& V^{\ast }(w,z) \\& \quad =M \biggl[-c_{1}\ln w-c_{2}l\ln z+ \frac{2c_{1}}{\theta (1-\theta )\sigma _{11}^{\theta }}(\sigma _{11}+ \sigma _{12}w)^{\theta } \biggr]+\frac{1}{p}w^{p}+\frac{1}{p}z^{p} \\& \quad \stackrel{\Delta }{=}MV_{1}^{\ast }+V_{2}^{\ast }, \end{aligned}$$ where \(\theta ,p\in (0,1)\), and M is a sufficiently large constant satisfying the following condition: $$ M \biggl[-2Dw_{0} \biggl(\sqrt{ \frac{Dx_{0}\beta }{(D+\sigma _{11}^{2}+\frac{2\sigma _{12}Dx_{0}}{\sigma _{11}})(D+rc^{u}_{o}+\frac{1}{2}\sigma _{21}^{2})}}-1 \biggr) \biggr]+f^{u}+g^{u} \leq -2, $$ $$\begin{aligned}& c_{1}= \frac{Dw_{0}}{1+\sigma ^{2}_{11}+\frac{2\sigma _{12}Dw_{0}}{(D-\theta )\sigma _{11}}}, \\& c_{2}=\frac{Dw_{0}}{D+rc^{u}_{o}+\frac{1}{2}\sigma ^{2}_{21}}, \\& f^{u}=\sup_{w\in R_{+}} \biggl\{ Dw_{0}w^{p-1}-Dw^{p}- \frac{1-p}{2}\sigma ^{2} _{12}w^{p+2} \biggr\} , \\& g^{u}=\sup_{w\in R_{+}} \biggl\{ -Dz^{p}-rc^{l}_{o}z^{p}- \frac{1-p}{2} \biggl[ \sigma _{22}\prod _{0< \tau _{n+l}< t}(1-h_{n+l}) \biggr]^{2}y^{p+2} \biggr\} . 
\end{aligned}$$ It shows that $$ \liminf_{\varepsilon \rightarrow 0, (w,z)\in R_{+}\setminus D}V^{ \ast }(w,z)=+\infty , $$ where \(D=(\varepsilon ,\frac{1}{\varepsilon })\times (\varepsilon , \frac{1}{\varepsilon })\) and \(V^{\ast }(w,z)\) is a continuous function. Therefore, \(V^{ \ast }(w,z)\) has a minimum point \((w_{0},z_{0})\) in the interior of \(\mathbb{R}^{2}_{+}\). Thus,we can define a nonnegative \(C^{2}\)-function \(V:\mathbb{R}^{2}_{+}\rightarrow \mathbb{R}_{+} \) $$ V(w,z)=V^{\ast }(w,z)-V^{\ast }(w_{0},z_{0}). $$ We can obtain the following equation by Itô's formula: $$ LV=LV^{\ast }=MLV^{\ast }_{1}+LV^{\ast }_{2}. $$ $$\begin{aligned} LV^{\ast }_{1} =&-\frac{c_{1}Dw_{0}}{w}+c_{1}D+ \frac{c_{1}\beta (\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z)}{K[A+w+B(\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z)]} \\ &{}+\frac{2c_{1}\sigma _{12}Dw_{0}}{(1-\theta )\sigma ^{\theta }_{11}}( \sigma _{11}+\sigma _{12}w)^{\theta -1}- \frac{2c_{1} \sigma _{12}w}{(1-\theta )\sigma ^{\theta }_{11}}(\sigma _{11}+\sigma _{12}w)^{ \theta -1} \\ &{}- \frac{2c_{1}\sigma _{12} \beta w (\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z)}{(1-\theta )\sigma ^{\theta }_{11}k[A+w+B(\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z)]}( \sigma _{11}+\sigma _{12}w)^{\theta -1} \\ &{}+\frac{c_{1}}{2}(\sigma _{11}+\sigma _{12}w)^{2}- \frac{c_{1}\sigma _{12}^{2}}{\sigma _{11}^{\theta }}(\sigma _{11}+ \sigma _{12}w)^{\theta }w^{2} \\ &{}-\frac{c_{2}\beta w}{A+w+B(\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z)}+c_{2}D+c_{2}rc_{o}(t) \\ &{}+\frac{c_{2}}{2} \biggl[\sigma _{21}+\sigma _{22} \prod_{0< \tau _{n+l}< t}(1-h_{n+l})z \biggr]^{2} \\ \leq& -\frac{c_{1}Dw_{0}}{w}-c_{1} \beta w+c_{2}\beta w +c_{1}D+ \frac{c_{1} \beta w}{kA}+ \frac{2c_{1}\sigma _{12}Dw_{0}}{(1-\theta )\sigma 11} \\ &{}+c_{1}\sigma _{11}^{2}- \frac{c_{1}}{2}(\sigma _{11}-\sigma _{12}w)^{2}+c_{2}D+c_{2}rc^{u}_{o}+ \frac{c_{2}}{2} \biggl(\sigma _{21}+\sigma _{22} \prod_{0< \tau _{n+l}< t}(1-h_{n+l})z \biggr)^{2} \\ \leq& -2\sqrt{c_{1}c_{2}Dw_{0}\beta }+c_{1} \biggl[D+\sigma ^{2}_{11}+ \frac{2\sigma _{12}Dw_{0}}{(1-\theta )\sigma _{11}} \biggr]+c_{2} \biggl[D+rc^{u}_{o}+ \frac{\sigma ^{2}_{21}}{2} \biggr] \\ &{}+ \biggl[\frac{c_{1}\beta }{kA}+c_{2}\sigma _{21} \biggl( \sigma _{22}\prod_{0< \tau _{n+l}< t}(1-h_{n+l}) \biggr) \biggr]w \\ &{}+ \frac{c_{2}(\sigma _{22}\prod_{0< \tau _{n+l}< t}(1-h_{n+l}))^{2}z^{2}}{2}+c_{2} \beta w \\ =&-2w_{0} \biggl(\sqrt{ \frac{w_{0}}{(1+\sigma _{11}^{2}+\frac{2\sigma _{12}w_{0}}{\sigma _{11}})(1+rc^{u}_{0}+\frac{1}{2}\sigma _{21}^{2})}}-1 \biggr) \\ &{}+ \biggl[\frac{c_{1}\beta }{kA}+c_{2}\sigma _{21} \biggl( \sigma _{22}\prod_{0< \tau _{n+l}< t}(1-h_{n+l}) \biggr) \biggr]z \\ &{}+ \frac{c_{2}(\sigma _{22}\prod_{0< \tau _{n+l}< t}(1-h_{n+l}))^{2}}{2}z^{2}+c_{2} \beta w, \end{aligned}$$ where \(c_{1}\) and \(c_{2}\) are such that $$ c_{1} \biggl[D+\sigma _{11}^{2}+ \frac{2\sigma _{12}Dw_{0}}{(1-\theta )\sigma _{11}} \biggr]=c_{2} \biggl[D+rc^{u}_{o}+ \frac{1}{2}\sigma ^{2}_{21} \biggr]=Dw_{0}. $$ The function \(\frac{Dw_{0}\beta }{(D+\sigma _{11}^{2}+\frac{2\sigma _{12}Dw_{0}}{\sigma _{11}})(D+rc^{u}_{o}+\frac{1}{2}\sigma _{21}^{2})}>1\) is continuous. 
Choose \(\varepsilon >0\) sufficiently small such that $$\begin{aligned} LV^{\ast }_{1} \leq& -2Dw_{0} \biggl(\sqrt{ \frac{Dw_{0}\beta }{(D+\sigma _{11}^{2}+\frac{2\sigma _{12}Dw_{0}}{\sigma _{11}})(D+rc^{u}_{o}+\frac{1}{2}\sigma _{21}^{2})}}-1 \biggr) \\ &{}+ \biggl[\frac{c_{1}\beta }{kA}+c_{2}\sigma _{21} \biggl( \sigma _{22}\prod_{0< \tau _{n+l}< t}(1-h_{n+l}) \biggr) \biggr]z \\ &{}+ \frac{c_{2}(\sigma _{22}\prod_{0< \tau _{n+l}< t}(1-h_{n+l}))^{2}z^{2}}{2}+c_{2} \beta w. \end{aligned}$$ We can also have $$\begin{aligned} LV^{\ast }_{2} \leq& D w_{0}w^{p-1}-Dw^{p}- \frac{\beta \prod_{0< \tau _{n+l}< t}(1-h_{n+l})zw^{p}}{k(A+w+B(\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z)])} \\ &{}-\frac{1-p}{2}w^{p-2}w^{2}(\sigma _{11}+\sigma _{12}w)^{2}+ \frac{\beta w[\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z]^{p}}{A+w+B(\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z} \\ &{}-Dw^{p}-rc_{o}(t)w^{p}- \frac{1-p}{2} \biggl[\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z \biggr]^{p}\biggl( \sigma _{21}+\sigma _{22} \biggl[ \prod_{0< \tau _{n+l}< t}(1-h_{n+l})z \biggr]^{2}\biggr) \\ \leq& D w_{0}w^{p-1}-Dw^{p}- \frac{1-p}{2}\sigma _{12}^{2}w^{p+2}-Dw^{p}-rc^{l}_{o}z^{p} \\ &{}-\frac{1-p}{2}\sigma ^{2}_{22} \biggl[\prod _{0< \tau _{n+l}< t}(1-h_{n+l})z \biggr]^{p+2}+ \frac{\beta w (\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z)^{p}}{A}. \end{aligned}$$ $$\begin{aligned} LV \leq& M \biggl\{ -2Dw_{0} \biggl(\sqrt{ \frac{Dw_{0}\beta }{(D+\sigma _{11}^{2}+\frac{2\sigma _{12}Dw_{0}}{\sigma _{11}})(D+rc^{u}_{o}+\frac{1}{2}\sigma _{21}^{2})}}-1 \biggr) \\ &{}+ \biggl[\frac{c_{1}\beta }{kA}+c_{2}\sigma _{21} \sigma _{22}\prod_{0< \tau _{n+l}< t}(1-h_{n+l}) \biggr]z+ \frac{c_{2}(\sigma _{22}\prod_{0< \tau _{n+l}< t}(1-h_{n+l}))^{2}}{2}z^{2}+c_{2} \beta w \biggr\} \\ &{}+Dw_{0}x^{p-1}-Dx^{p}-rc^{l}_{o}z^{p} \\ &{}-\frac{1-p}{2}\sigma ^{2}_{22} \biggl[\prod _{0< \tau _{n+l}< t}(1-h_{n+l})z \biggr]^{p+2}+ \frac{\beta x (\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z)^{p}}{A}. \end{aligned}$$ $$\begin{aligned} H(w,z) =& M \biggl\{ -2Dw_{0} \biggl(\sqrt{ \frac{Dw_{0}\beta }{(D+\sigma _{11}^{2}+\frac{2\sigma _{12}Dw_{0}}{\sigma _{11}})(D+rc^{u}_{o}+\frac{1}{2}\sigma _{21}^{2})}}-1 \biggr) \\ &{}+ \biggl[\frac{c_{1}\beta }{kA}+c_{2}\sigma _{21} \sigma _{22}\prod_{0< \tau _{n+l}< t}(1-h_{n+l}) \biggr]z \\ &{}+ \frac{c_{2}(\sigma _{22}\prod_{0< \tau _{n+l}< t}(1-h_{n+l}))^{2}}{2}z^{2}+c_{2} \beta w \biggr\} \\ &{}+f(x)+g(z)+ \frac{ \beta x (\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z)^{p}}{A}, \end{aligned}$$ where \(f(x)=Dw_{0}w^{p-1}-Dx^{p}-\frac{1-p}{2}\sigma _{12}^{2}x^{p+2}\) and \(g(z)=-Dx^{p}-rc^{l}_{o}z^{p} -\frac{1-p}{2}\sigma ^{2}_{22}[\prod_{0< \tau _{n+l}<t}(1-h_{n+l})z]^{p+2}\). Then we can get $$ H(w,z)\leq \textstyle\begin{cases} H(+\infty ,z)\rightarrow -\infty ,\quad \mbox{as } w\rightarrow + \infty , \\ H(w,+\infty )\rightarrow -\infty ,\quad \mbox{as } z\rightarrow + \infty , \\ M[-2Dw_{0}(\sqrt{ \frac{Dw_{0}\beta }{(D+\sigma _{11}^{2}+\frac{2\sigma _{12}Dw_{0}}{\sigma _{11}})(D+rc^{u}_{o}+\frac{1}{2}\sigma _{21}^{2})}}-1)] \\ \quad {}+f^{u}+g^{u}\leq -2,\quad \mbox{as } w\rightarrow 0^{+},z\rightarrow 0^{+}. \end{cases} $$ Therefore, there exists sufficiently small \(\varepsilon >0\) such that $$ LV\leq -1, \quad \mbox{for any } (w,z)\in \mathbb{R}^{2}_{+} \setminus \mathbb{D}, $$ where \(\mathbb{D}=(\varepsilon ,\frac{1}{\varepsilon })\times (\varepsilon , \frac{1}{\varepsilon })\). 
On the other hand, the diffusion matrix of system (3.10) is given by $$\begin{aligned} \sum_{i,j=1}^{2}a_{ij}(w,z) \xi _{i}\xi _{j} =& \bigl( \bigl(\sigma _{11}w+ \sigma _{12}w^{2} \bigr) \xi _{1}, \bigl( \sigma _{21}z+\sigma _{22} \prod_{0< \tau _{n+l}< t}(1-h_{n+l})z^{2} \bigr) \xi _{2} \bigr) \\ &{}\times \begin{pmatrix} (\sigma _{11}w+\sigma _{12}w^{2})\xi _{1} \\ (\sigma _{21}z+\sigma _{22}\prod_{0< \tau _{n+l}< t}(1-h_{n+l})z^{2}) \xi _{2} \end{pmatrix} \\ =& \bigl(\sigma _{11}w+\sigma _{12}w^{2} \bigr)\xi ^{2}_{1}+ \bigl(\sigma _{21}z+ \sigma _{22} \prod_{0< \tau _{n+l}< t}(1-h_{n+l})z^{2} \bigr)\xi ^{2}_{2} \\ \geq& G \Vert \xi \Vert ^{2} \quad \mbox{for any } (w,z)\in D_{e}\subset \mathbb{D}^{2}_{+}, \end{aligned}$$ where \(\xi =(\xi _{1}, \xi _{2})\in \mathbb{D}^{2}_{+}\), \(G=\min_{(w,z)\in D_{e}}\{(\sigma _{11}w+\sigma _{12}w^{2})\xi ^{2}_{1}+( \sigma _{21}z+\sigma _{22}\prod_{0<\tau _{n+l}<t}(1- h_{n+l})z^{2}) \xi ^{2}_{2}\}\), and \(D_{e}=[\frac{1}{e},e]\times [\frac{1}{e},e]\). From Theorem 4.7 in reference [9], it can be known that system (3.10) is ergodic and has a unique stationary distribution, and the distribution of the process converges weakly to the measure with density. □ From Lemma 3.3 and Theorem 4.4, we can easily know that if \(Dx_{0}\beta >(D+\sigma _{11}^{2}+ \frac{2\sigma _{12}Dx_{0}}{\sigma _{11}})(D+rc^{u}_{o}+\frac{1}{2} \sigma _{21}^{2})\) holds, system (2.1) admits a unique stationary distribution and it has the ergodic property for any initial value \((x(0),y(0))\in \mathbb{R}^{2}_{+}\). If it is assumed that \(x(0)=0.8\), \(y(0)=0.3\), \(c_{o}(0)=0.9\), \(c_{e}(0)=0.9\), \(D=1\), \(\beta =0.1\), \(k=0.1\), \(A=0.05\), \(B=0.002\), \(r=0.1\), \(f=0.1\), \(g=0.5\), \(m=0.5\), \(h=0.1\), \(h_{1}=0.02\), \(h_{2}=0.2\), \(\sigma _{11}=0.01\), \(\sigma _{12}=0.01\), \(\sigma _{21}=0.01\), \(\sigma _{22}=0.01\), \(l=0.25\), \(\tau =4\), then when \(\mu =0.1\), the microorganism \(y(t)\) will survive (as seen in (a) of Fig. 1), and when \(\mu =0.9\), the microorganism \(y(t)\) will be extinct (as seen in (b) of Fig. 1). From Theorem 4.1, Remark 4.5, and the computer simulations in Fig. 1, we conjecture that there must exist a threshold \(\mu ^{\ast }\): if \(\mu >\mu ^{\ast }\), the microorganism \(y(t)\) will be extinct, and if \(\mu <\mu ^{\ast }\), the microorganism \(y(t)\) will survive. If it is assumed that \(x(0)=0.8\), \(y(0)=0.3\), \(c_{o}(0)=0.9\), \(c_{e}(0)=0.9\), \(D=1\), \(\beta =0.1\), \(k=0.1\), \(A=0.05\), \(B=0.002\), \(r=0.1\), \(f=0.1\), \(g=0.5\), \(m=0.5\), \(h=0.1\), \(\mu =0.1\), \(h_{2}=0.2\), \(\sigma _{11}=0.01\), \(\sigma _{12}=0.01\), \(\sigma _{21}=0.01\), \(\sigma _{22}=0.01\), \(l=0.25\), \(\tau =4\), then when \(h_{1}=0.01\), the microorganism \(y(t)\) will survive (as seen in (c) of Fig. 2), and when \(h_{1}=0.1\), the microorganism \(y(t)\) will be extinct (as seen in (d) of Fig. 2). From Theorem 4.1, Remark 4.5, and the computer simulations in Fig. 2, we conjecture that there must exist a threshold \(h_{1}^{\ast }\): if \(h_{1}>h_{1}^{\ast }\), the microorganism \(y(t)\) will be extinct, and if \(h_{1}< h_{1}^{\ast }\), the microorganism \(y(t)\) will survive.
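The threshold behaviour reported above can be explored with a plain Euler–Maruyama discretisation of system (2.1) in which the two impulsive events are applied at the ends of the corresponding sub-intervals. The sketch below is our own illustration in Python, not the code used for the figures: the feed concentration \(x_{0}\) is not listed among the simulation parameters, so the value used here is an assumption, the positivity truncation is a crude device, and a single sample path over a finite horizon is only meant to show how the impulses enter the scheme.

import math
import random

# parameter values from the simulation section where available;
# x0_feed (the input nutrient concentration x_0) is NOT given there and is assumed here
D, beta, k, A, B, r = 1.0, 0.1, 0.1, 0.05, 0.002, 0.1
f, g, m, h = 0.1, 0.5, 0.5, 0.1
s11, s12, s21, s22 = 0.01, 0.01, 0.01, 0.01
h1, h2, l, tau = 0.02, 0.2, 0.25, 4.0
x0_feed = 0.8

def simulate(mu, T=200.0, dt=1e-3, seed=1):
    """Euler-Maruyama sample path of system (2.1) with dredging and pulse-input impulses."""
    random.seed(seed)
    x, y, co, ce = 0.8, 0.3, 0.9, 0.9
    per_period = int(round(tau / dt))
    dredge_step = int(round(l * tau / dt))
    for i in range(1, int(T / dt) + 1):
        dB1 = random.gauss(0.0, math.sqrt(dt))
        dB2 = random.gauss(0.0, math.sqrt(dt))
        denom = A + x + B * y
        dx = (D * (x0_feed - x) - beta * x * y / (k * denom)) * dt + x * (s11 + s12 * x) * dB1
        dy = (beta * x * y / denom - D * y - r * co * y) * dt + y * (s21 + s22 * y) * dB2
        x, y = max(x + dx, 0.0), max(y + dy, 0.0)   # crude positivity truncation
        co += (f * ce - (g + m) * co) * dt
        ce += -h * ce * dt
        phase = i % per_period
        if phase == dredge_step:   # t = (n+l)*tau: impulsive dredging
            y *= (1.0 - h1)
            ce *= (1.0 - h2)
        if phase == 0:             # t = (n+1)*tau: pulse input of environmental toxicant
            ce += mu
    return y

for mu in (0.1, 0.9):
    print("mu =", mu, "  y(T) =", simulate(mu))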
If it is assumed that \(x(0)=0.8\), \(y(0)=0.3\), \(c_{o}(0)=0.9\), \(c_{e}(0)=0.9\), \(D=1\), \(\beta =0.1\), \(k=0.1\), \(A=0.05\), \(B=0.002\), \(r=0.1\), \(f=0.1\), \(g=0.5\), \(m=0.5\), \(h=0.1\), \(\mu =0.1\), \(h_{1}=0.07\), \(\sigma _{11}=0.01\), \(\sigma _{12}=0.01\), \(\sigma _{21}=0.01\), \(\sigma _{22}=0.01\), \(l=0.25\), \(\tau =4\), then when \(h_{2}=0.9\), the microorganism \(y(t)\) will survive (as seen in (e) of Fig. 3), and when \(h_{2}=0.1\), the microorganism \(y(t)\) will be extinct (as seen in (f) of Fig. 3). From Theorem 4.1, Remark 4.5, and the computer simulations in Fig. 3, we conjecture that there must exist a threshold \(h_{2}^{\ast }\): if \(h_{2}< h_{2}^{\ast }\), the microorganism \(y(t)\) will be extinct, and if \(h_{2}>h_{2}^{\ast }\), the microorganism \(y(t)\) will survive. If it is assumed that \(x(0)=0.8\), \(y(0)=0.3\), \(c_{o}(0)=0.9\), \(c_{e}(0)=0.9\), \(D=1\), \(\beta =0.1\), \(k=0.1\), \(A=0.05\), \(B=0.002\), \(r=0.1\), \(f=0.1\), \(g=0.5\), \(m=0.5\), \(h=0.1\), \(\mu =0.1\), \(h_{1}=0.02\), \(h_{2}=0.1\), \(\sigma _{12}=0.1\), \(\sigma _{21}=0.05\), \(\sigma _{22}=0.05\), \(l=0.25\), \(\tau =4\), then when \(\sigma _{11}=0.3\), the microorganism \(y(t)\) will survive (as seen in (g) of Fig. 4), and when \(\sigma _{11}=0.6\), the microorganism \(y(t)\) will be extinct (as seen in (h) of Fig. 4). From Theorem 4.1, Remark 4.5, and the computer simulations in Fig. 4, we conjecture that there must exist a threshold \(\sigma _{11}^{\ast }\): if \(\sigma _{11}>\sigma _{11}^{\ast }\), the microorganism \(y(t)\) will be extinct, and if \(\sigma _{11}<\sigma _{11}^{\ast }\), the microorganism \(y(t)\) will survive. Threshold analysis of parameter μ in system (2.1) with \(x(0)=0.8\), \(y(0)=0.3\), \(c_{o}(0)=0.9\), \(c_{e}(0)=0.9\), \(D=1\), \(\beta =0.1\), \(k=0.1\), \(A=0.05\), \(B=0.002\), \(r=0.1\), \(f=0.1\), \(g=0.4\), \(m=0.5\), \(h=0.1\), \(h_{1}=0.02\), \(h_{2}=0.2\), \(\sigma _{11}=0.01\), \(\sigma _{12}=0.01\), \(\sigma _{21}=0.01\), \(\sigma _{22}=0.01\), \(l=0.25\), \(\tau =4\), (a): \(y(t)\) survival with parameter \(\mu =0.1\); (b): \(y(t)\) extinction with parameter \(\mu =0.9\) Threshold analysis of parameter \(h_{1}\) in system (2.1) with \(x(0)=0.8\), \(y(0)=0.3\), \(c_{o}(0)=0.9\), \(c_{e}(0)=0.9\), \(D=1\), \(\beta =0.1\), \(k=0.1\), \(A=0.05\), \(B=0.002\), \(r=0.1\), \(f=0.1\), \(g=0.5\), \(m=0.5\), \(h=0.1\), \(\mu =0.1\), \(h_{2}=0.2\), \(\sigma _{11}=0.01\), \(\sigma _{12}=0.01\), \(\sigma _{21}=0.01\), \(\sigma _{22}=0.01\), \(l=0.25\), \(\tau =4\), (c): \(y(t)\) survival with parameter \(h_{1}=0.01\); (d): \(y(t)\) extinction with parameter \(h_{1}=0.1\) Threshold analysis of parameter \(h_{2}\) in system (2.1) with \(x(0)=0.8\), \(y(0)=0.3\), \(c_{o}(0)=0.9\), \(c_{e}(0)=0.9\), \(D=1\), \(\beta =0.1\), \(k=0.1\), \(A=0.05\), \(B=0.002\), \(r=0.1\), \(f=0.1\), \(g=0.5\), \(m=0.5\), \(h=0.1\), \(\mu =0.1\), \(h_{1}=0.07\), \(\sigma _{11}=0.01\), \(\sigma _{12}=0.01\), \(\sigma _{21}=0.01\), \(\sigma _{22}=0.01\), \(l=0.25\), \(\tau =4\), (e): \(y(t)\) survival with parameter \(h_{2}=0.9\); (f): \(y(t)\) extinction with parameter \(h_{2}=0.1\) Threshold analysis of parameter \(\sigma _{11}\) in system (2.1) with \(x(0)=0.3\), \(y(0)=0.3\), \(c_{o}(0)=0.3\), \(c_{e}(0)=0.3\), \(D=1\), \(\beta =0.1\), \(k=0.1\), \(A=1.5\), \(B=1\), \(r=0.2\), \(f=0.1\), \(g=0.5\), \(m=0.5\), \(h=0.1\), \(\mu =0.2\), \(h_{1}=0.3\), \(h_{2}=0.2\), \(\sigma _{12}=0.01\), \(\sigma _{21}=0.01\), \(\sigma _{22}=0.01\), \(l=0.25\), \(\tau =4\).
(g): \(y(t)\) survival with parameter \(\sigma _{11}=0.03\); (h): \(y(t)\) extinction with parameter \(\sigma _{11}=0.4\) In this work, we consider a stochastic eutrophication-chemostat model with impulsive dredging and pulse inputting on environmental toxicant. The sufficient condition for the extinction of microorganisms is obtained. The sufficient condition for the investigated system with unique ergodic stationary distribution is also obtained by the Lyapunov functions method. The results of mathematical analysis and numerical analysis show that the stochastic noise, impulsive diffusion, and pulse input on environmental toxicant play important roles in the extinction and survival of the microorganisms. These results indicate the effective and reliable controlling strategy for water resource management with eutrophication. Smith, H., Waltman, P.: The Theory of the Chemostat: Dynamics of Microbial Competition. Cambridge University Press, Cambridge (1995) Li, Z., Chen, L., Liu, Z.: A chemostat model with variable yield and impulsive state feedback control is considered. Appl. Math. Model. 36, 1255–1266 (2012) Zhang, X., Yuan, R.: Sufficient and necessary conditions for stochastic near-optimal controls: a stochastic chemostat model with non-zero cost inhibiting. Appl. Math. Model. 78, 601–626 (2020) Lakshmikantham, V.: Theory of Impulsive Differential Equations. World Scientific, Singapore (1989) Jiao, J., et al.: Dynamics of a stage-structured predator–prey model with prey impulsively diffusing between two patches. Nonlinear Anal., Real World Appl. 11, 2748–2756 (2010) Jiao, J., et al.: Dynamical analysis of a five-dimensioned chemostat model with impulsive diffusion and pulse input environmental toxicant. Chaos Solitons Fractals 44, 17–27 (2011) Lv, X., Meng, X., Wang, X.: Extinction and stationary distribution of an impulsive stochastic chemostat model with nonlinear perturbation. Chaos Solitons Fractals 110, 273–279 (2018) Zhou, P.: Reservoir desilting is an effective way to alleviate water crisis. Energy Conserv. Environ. Prot. 4, 2014S3919 (2014) Mao, X.: Stochastic Differential Equations and Applications. Elsevier, Amsterdam (2007) Wang, K.: Stochastic Biomathematical Models. Science Press, Beijing (2010) (in Chinese) Liu, Q., Jiang, D.: Stationary distribution and extinction of a stochastic SIR model with nonlinear perturbation. Appl. Math. Lett. 73, 8–15 (2017) Data sharing not applicable to this article as no datasets were generated or analysed during the current study. This paper is supported by the National Natural Science Foundation of China (11761019, 11361014), the Science Technology Foundation of Guizhou Education Department (20175736-001, 2008038), the Science Technology Foundation of Guizhou (2010J2130),the Project of High Level Creative Talents in Guizhou Province (No. 20164035), and the Joint Project of Department of Commerce and Guizhou University of Finance and Economics (No. 2016SWBZD18). School of Mathematics and Statistics, Guizhou University of Finance and Economics, Guiyang, 550025, P.R. China Jianjun Jiao Key Laboratory for Information System of Mountainous Area and Protection of Ecological Environment of Guizhou Province, Guizhou Normal University, Guiyang, 550001, P.R. China Qiuhua Li All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript. Correspondence to Jianjun Jiao. The authors declare that there is no conflict of interests. Jiao, J., Li, Q. 
Dynamics of a stochastic eutrophication-chemostat model with impulsive dredging and pulse inputting on environmental toxicant. Adv Differ Equ 2020, 447 (2020). https://doi.org/10.1186/s13662-020-02905-5 Stochastic eutrophication-chemostat model Impulsive dredging Pulse input
Rhamnolipid production by a gamma ray-induced Pseudomonas aeruginosa mutant under solid state fermentation Ghadir S. El-Housseiny1, Khaled M. Aboshanab ORCID: orcid.org/0000-0002-7608-850X1, Mohammad M. Aboulwafa1 & Nadia A. Hassouna1 AMB Express volume 9, Article number: 7 (2019) Cite this article Solid-state fermentation has a special advantage of preventing the foaming problem that obstructs submerged fermentation processes for rhamnolipid production. In the present work, a 50:50 mixture of sugarcane bagasse and sunflower seed meal was selected as the optimum substrate for rhamnolipid production using a Pseudomonas aeruginosa mutant 15GR and an impregnating solution including 5% v/v glycerol. Using Box–Behnken design, the optimum fermentation conditions were found to be an inoculum size 1% v/v, temperature 30 °C and unlike other studies, pH 8. These optimized conditions yielded a 67% enhancement of rhamnolipid levels reaching 46.85 g rhamnolipids per liter of impregnating solution, after 10 days, which was about 5.5 folds higher than that obtained by submerged liquid fermentation. Although maximum rhamnolipids concentration was obtained after 10 days of incubation, rhamnolipids concentration already reached high levels (41.87 g/l) after only 6 days. This rhamnolipid level was obtained in a shorter time and using lower carbon source concentrations than most studies reported so far. The findings obtained indicate an enormous potential for employing solid-state fermentation for rhamnolipid production by the studied isolate. Rhamnolipids (RLs) are promising biosurfactants mainly used for environmental applications because of their impressive emulsifying and surface active properties. However, their use is limited because of their elevated costs relative to that of chemical surfactants (Noh et al. 2014). Research on RLs production was mainly directed to submerged liquid fermentation (SLF) until recently. This production method creates serious foaming problems which are expensive to combat (Sodagari and Ju 2014; Winterburn and Martin 2012). Instead, solid-state fermentation (SSF) which has great potential for RL production has been introduced (Singhania et al. 2009; Narendrakumar et al. 2017). SSF is a biological process performed in the absence of free water; using a substrate having sufficient moisture to aid in microbial growth and metabolic activity. The solid substrate could either be an inert material supporting the microorganism's growth on it or the source of nutrients (Thomas et al. 2013). The potential of SSF is to offer the microbes an environment very similar to the natural environment where they normally live. This is probably the main reason why higher product concentrations are obtained using SSF in comparison to SLF (Thomas et al. 2013). The substrates utilized in SSF are usually agro-industrial residues or by-products and this not only offers economic value to these wastes, but also resolves their disposal problem and therefore reduces pollution. Moreover, the use of these low cost residues makes the bioprocess economically attractive. Therefore, these environmental benefits have shifted the industrial manufacturing towards SSF due to the increased demand for ecofriendly processes rather than chemical processes (Thomas et al. 2013). 
Other advantages of the SSF over the SLF are: smaller volume of fermentor; removed stirring costs; lower sterilization energy costs; reduced product recovery costs; lower contamination risk since the environment is less favourable for many bacteria (Mussatto et al. 2012). Only a few studies on the RL production using SSF have been reported so far (Camilios-Neto et al. 2008, 2011). Accordingly, the present work aims at studying the various physiological parameters influencing RL production by P. aeruginosa mutant 15GR under SSF using Response Surface Methodology (RSM). The P. aeruginosa 15GR from Culture Collection Ain Shams University (CCASU) (strain number, CCASU-P15GR) is a RL hyperproducer mutant obtained by gamma radiation of P. aeruginosa isolate P6 (CCASU-P6) in our previous study (El-Housseiny et al. 2017). This isolate was preserved in Luria–Bertani (LB) broth (Lab M, Topley house, England) containing 20% glycerol at − 80 °C. The mineral salts medium (MSM) (Bodour et al. 2003) containing 2% v/v glycerol as the sole carbon source (named GMSM) was prepared and used in this study. The pH of this media was adjusted to 7 using KOH pellets. Production of RLs A loopful from P. aeruginosa 15GR was inoculated into 25 ml trypticase soy broth contained in an Erlenmeyer flask (250 ml) and incubated overnight at 30 °C and 250 rpm. The resulting culture was centrifuged (10,000 rpm for 10 min) and the cells were then washed once and resuspended in GMSM to obtain a count of 5 × 109 cfu/ml. Production of RL by SSF using different solid substrates Each Erlenmeyer flask (250 ml) contained 10 g of one or a mixture of two of the following solid substrates (dried at room temperature): sugarcane bagasse (residue remaining after extraction of sugarcane juice from sugarcane stalks obtained from a local market, chopped into small fragments), sunflower seed meal (sunflower seeds were obtained from a local market, grinded and passed through a mesh sieve with 1.4 mm openings), corn bran, soybean meal, wheat bran, rice straw (all obtained from a local market). In each case, the total initial dry mass was 10 g (Table 1). The flasks were then sterilized by autoclaving for 15 min at 121 °C. Impregnating solution used was GMSM and its used amount was different from one substrate to another, depending on the substrate's liquid absorption capacity (see Table 1) (Camilios-Neto et al. 2011). This solution was inoculated with 0.4 ml of seed culture, and mixed with the solid substrate (final bacterial concentration = 2 × 109 cfu per 10 g solid substrate). The flasks were then incubated at 30 °C for 6 days without agitation. Control flasks containing the different substrates were treated similar to the test ones but were left uninoculated (Camilios-Neto et al. 2011). Table 1 Different solid substrates used and their liquid absorption capacity (ml) of impregnating solution Extraction of RLs Aliquots of 50 ml of distilled water were added to each flask at the end of the incubation period and these flasks were agitated for 1 h at 30 °C and 200 rpm. The obtained suspension was filtered through gauze pieces, and the remaining liquid was manually squeezed out then added to the filtrate. The whole process was repeated twice (Camilios-Neto et al. 2011). The filtrates were then pooled and centrifuged for 10 min at 10,000 rpm to collect the supernatant. In case of sunflower seed meal and mixtures containing this substrate, supernatant was found to contain residual oil. 
Therefore, these supernatants were vigorously shaken with n-hexane 1:1 (v/v) to remove residual oil and centrifuged (10,000 rpm, 10 min) to separate the aqueous and n-hexane phases. This was done to avoid interference during orcinol assay (Kosaric and Vardar-Sukan 2014). The aqueous phase was then used for RL quantification. RL concentrations were expressed first as the product mass per kilogram of initial dry solids (g/kg IDS). In addition, to compare with results obtained in SLF, we expressed RL concentration as grams per liter of impregnating solution added to the solid substrate (g/l IS) (Camilios-Neto et al. 2011). Quantification of RLs RL concentration was obtained using the modified colorimetric orcinol assay (Chandrasekaran and BeMiller 1980; Koch et al. 1991; Abdel-Mawgoud et al. 2009). First, RLs in the supernatant were extracted as explained by Wu and Ju (Wu and Ju 1998) using ethyl acetate. The separated organic phase was then evaporated at 80 °C and the resulting residue was dissolved in distilled H2O adjusted to pH 7 using 2.5 N NaHCO3. An aliquot of 900 μl orcinol reagent (0.19% orcinol in 53% H2SO4) was added to 100 μl of this aqueous extract and heated in a water bath (80 °C for 30 min). The mixture was allowed to cool to room temperature and the absorbance of the developed color (A421) was measured against blank (Daoshan et al. 2004). The concentration of RL in the supernatant was calculated from an equation of a calibration curve prepared using a standard RL (AgSciTech Inc., Logan, Utah, USA) (A421 nm = 0.0047 × RL concentration), considering the dilution factor (D.F.) of the diluted aqueous extract, as follows: $${\text{Concentration of RL }}\left( {{\text{mg}}/{\text{l}}} \right)\, = \,\left( {{\text{A}}_{ 4 2 1} /0.00 4 7} \right)\, \times \,{\text{D}}.{\text{F}}.$$ Studying the different factors affecting RL production by P. aeruginosa 15GR using SSF Studying the time course of RL production in SSF using the selected substrate (sugarcane bagasse and sunflower seed meal) and comparing it to the production in SLF Six Erlenmeyer flasks containing the selected substrate were prepared. Twenty milliliters impregnating solution (GMSM) inoculated with 0.4 ml of seed culture (2% v/v) was mixed with the solid substrate and these flasks were then incubated at 30 °C. Over an incubation period of 12 days, one flask was removed at specific time intervals for extraction and determination of RL concentration. One flask was left uninoculated and served as a control. To compare between SSF and SLF, the production process was also carried out in 250 ml Erlenmeyer flasks containing 50 ml of GMSM. These flasks were inoculated with the seed culture prepared above (2% v/v) and incubated at 250 rpm and 30 °C. At specified time intervals, samples were taken from the fermentation broth for RL quantification. Effect of agitation rate In these experiments, two flasks were prepared as described above for SSF; one was incubated at 30 °C without agitation and the other incubated at 30 °C with an agitation rate of 250 rpm. After incubation, RLs were extracted as described. Control uninoculated flasks were prepared and treated similarly. Effect of using variable concentrations of glycerol in impregnating solution Flasks (250 ml) containing sugarcane bagasse and sunflower seed meal were prepared and sterilized. 
Twenty milliliters aliquots of MSM containing different concentrations of glycerol (2%, 5%, 10% v/v) were inoculated with seed culture (2% v/v), mixed with the solid substrate and incubated at 30 °C. After incubation, RLs were extracted as described above. Control uninoculated flasks were prepared and treated similarly. Response surface methodology (RSM) for optimizing RL production using SSF Factors such as inoculum size (represented by the code A), temperature (represented by the code B) and pH (represented by the code C) were optimized by RSM. Experimental Box–Behnken design (BBD) was employed and the factors and levels used for these experiments were: inoculum size of 1, 2 or 5% v/v; temperature of 30, 33.5 or 37 °C; and pH of 6, 7 or 8. A total of 13 runs were carried out with 1 centerpoint, each having an uninoculated control treated similarly. One response value, the RL concentration (RL, g/l) was measured accordingly after 10 days of incubation. The design of experiments was carried out by Design Expert® v. 7.0 (DesignExpert ® Software, Stat-Ease Inc., Statistics Made Easy, Minneapolis, MN, USA). Experimental verification of RSM results A new SSF experiment was performed using optimal culture conditions recommended by the numerical optimization function in the Design Expert software. The RL production was measured and compared with results predicted by the model. Studying the time course of RL production by P. aeruginosa 15GR under optimized conditions Six Erlenmeyer flasks containing 10 g of sugarcane bagasse and sunflower seed meal mixture were prepared. Twenty milliliters of impregnating solution (MSM + glycerol 5%) inoculated with 1% v/v of seed culture was mixed into the solid substrate and these flasks were incubated at 30 °C, initial pH 8 with no agitation. Over an incubation period of 12 days, one flask was removed at specific time intervals for extraction and determination of RL concentration. One flask was left uninoculated and served as a control. Production of RLs by SSF using different solid substrates As shown in Fig. 1, sugarcane bagasse, sunflower seed meal and corn bran gave the highest results as single substrates. Note that these three flasks produced large amounts of foam during extraction. However, when comparing the RL production in the different mixtures tested, the highest RL level of 28.25 g/l IS (56.5 g/kg IDS) was obtained with the mixture of sugarcane bagasse and sunflower seed meal after 6 days of incubation and this flask produced the highest foam up on extraction. Therefore, this solid state mixture was chosen for further experiments. Effect of different solid substrates on RL production by P. aeruginosa 15GR in SSF after 6 days of incubation at 30 °C. Values plotted are the means of triplicate results while error bars indicate the standard deviation of the data Different factors affecting RL production by P. aeruginosa 15GR using SSF Time course of RL production in SSF using the selected substrate (sugarcane bagasse + sunflower meal) and in SLF Figure 2 showed the profile of RL production in both SLF and SSF. In case of SSF, the RL level increased linearly at the beginning, to reach 28 g/l-IS (56 g/kg-IDS) after 6 days of incubation. This level continued to increase, but slowly, reaching 31.65 g/l-IS (63.3 g/kg-IDS) after 10 days of incubation and a decline in RL production was observed after further incubation. Therefore, results in subsequent experiments were obtained after 10 days of incubation. Time course of RL production by P. 
aeruginosa 15GR in a SLF using GMSM culture media, 30 °C and 250 rpm; b SSF of a 50:50 mixture of sugarcane bagasse and sunflower seeds, using GMSM as an impregnating solution. Values plotted are the means of triplicate results while error bars indicate the standard deviation of the data.
Using SLF, maximum RL production by P. aeruginosa 15GR was obtained at day 6, reaching only 8.45 g/l (Fig. 2).
Effect of agitation rate
RL production in SSF with an agitation rate of 250 rpm (30.5 ± 0.25 g/l-IS) did not differ significantly from that obtained without agitation (31.5 ± 0.5 g/l-IS). Therefore, no shaking was used in subsequent experiments.
Effect of variable concentrations of glycerol in impregnating solution
Flasks containing impregnating solution with different concentrations of glycerol were tested. As shown in Fig. 3, 5% v/v glycerol resulted in the highest RL production of 37.25 g/l-IS (74.5 g/kg IDS) after 10 days of incubation at 30 °C. Therefore, this concentration was used for further experiments.
Effect of variable glycerol concentrations on RL production by P. aeruginosa 15GR in SSF. Values plotted are the means of triplicate results while error bars indicate the standard deviation of the data.
After carrying out the experiments suggested by the Design Expert software, the observed responses were recorded (Table 2). From these responses, the software automatically suggests a model, that is, a good-fitting mathematical function relating the response to the input factors tested. Predicted responses are then calculated from this fitted equation (Table 2) and used to estimate residuals and construct statistical and graphical summaries in the Design Expert software. The coefficients in this equation compensate for the differences in the ranges of the factors as well as the differences in the effects. These coefficients cannot be intuitively interpreted due to their dependence on the scaling of the factor levels (https://www.statease.com/pubs/handbk_for_exp_sv.pdf). This fitted equation is given by Eq. 1:
$$\text{RL} = 80.97796 - 1.74507\,A - 2.3\,B + 4.50625\,C \qquad (1)$$
Table 2 Experimental Box–Behnken design (BBD) with the actual values of the independent factors inoculum size (A), temperature (B) and pH (C) and the observed and predicted responses.
ANOVA results are displayed in Table 3. The Model F-value of 36.99 for RL production implied the significance of the model, since there is only a 0.01% chance that such a large "Model F-value" could result from noise (P value < 0.0001). Moreover, A, B and C were all significant factors (Table 3). A low coefficient of variation (CV) of 8.62% was obtained, which indicates that the experimental values were of adequate reliability. The coefficient of determination R2 was 0.9250, indicating that 92.50% of the variability in the response can be explained by the model. The Predicted R-Squared (Pred R2) of 0.8620 was in acceptable agreement with the Adjusted R-Squared (Adj R2) of 0.90. An adequate precision ratio of 17.487 was recorded, which suggested an adequate signal and that the present model could be used to navigate the design space.
Table 3 The analysis of variance (ANOVA) for the response surface linear model regarding RL concentration (RL).
The 3D plots between the input factors are shown in Fig. 4.
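For illustration, the fitted model and the design it was estimated from can be reproduced outside Design Expert with a few lines of code. The sketch below is a minimal illustration, not the Design Expert workflow used in the study: it lists the 13 Box–Behnken runs in actual factor units and evaluates Eq. 1 over the design box, recovering the predicted optimum of roughly 46.3 g/l-IS at 1% v/v inoculum, 30 °C and pH 8.

```python
# Minimal sketch: 3-factor Box-Behnken design points and the fitted linear
# model RL = 80.97796 - 1.74507*A - 2.3*B + 4.50625*C (A, B, C in actual units).
import itertools

LEVELS = {                                  # coded level -> actual value
    "A": {-1: 1.0, 0: 2.0, 1: 5.0},         # inoculum size, % v/v
    "B": {-1: 30.0, 0: 33.5, 1: 37.0},      # temperature, deg C
    "C": {-1: 6.0, 0: 7.0, 1: 8.0},         # initial pH
}

def predicted_rl(a, b, c):
    """Fitted linear model for RL concentration (g/l-IS)."""
    return 80.97796 - 1.74507 * a - 2.3 * b + 4.50625 * c

# Box-Behnken for 3 factors: +/-1 on each pair of factors with the third at its
# centre level, plus one centre point -> 13 runs in total.
coded_runs = [(0, 0, 0)]
for i, j in itertools.combinations(range(3), 2):
    for si, sj in itertools.product((-1, 1), repeat=2):
        run = [0, 0, 0]
        run[i], run[j] = si, sj
        coded_runs.append(tuple(run))

for run in coded_runs:
    a, b, c = (LEVELS[f][lvl] for f, lvl in zip("ABC", run))
    print(f"A={a}%  B={b} degC  pH={c}  ->  predicted RL {predicted_rl(a, b, c):.2f} g/l-IS")

# Best predicted response inside the design box: lowest A and B, highest C.
print("predicted optimum:", round(predicted_rl(1.0, 30.0, 8.0), 2))  # 46.28 g/l-IS
```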
From these plots and using numerical optimization function in the Design expert software, optimum conditions for maximum RL production were found to be an inoculum size of 1%, temperature of 30 °C and pH of 8. Three dimensional (3D) surface plots for the effects of temperature, inoculum size and pH on RL production by P. aeruginosa 15GR using SSF (obtained from Design Expert software). a pH is fixed at 8, b Temperature is fixed at 30 °C, c Inoculum size is fixed at 1% v/v Experimental verification test Using these recommended optimum levels of the three factors (30 °C, 1% and pH 8), RL concentration reached 46.85 g/l-IS. This value was very similar to the value predicted by the model (46.28 g/l-IS) which reflects the accuracy and usefulness of the RSM to optimize the RL production process. Model diagnostics Box Cox plot The Box–Cox plot showed that no further transformation was needed and the model was proven to be sufficient (Fig. 5a). Diagnostic plots for the effects of temperature, inoculum size and pH on RL production by P. aeruginosa 15GR using SSF (obtained from Design Expert software) a Box–Cox plot. b Predicted vs. actual plot. c Residuals vs. run plot The predicted versus actual plot As shown in Fig. 5b, the values were distributed close to the straight line, which implied that actual and predicted values were very close to each other. Residuals vs Run plot showed that the points are scattered around zero suggesting that the model fit the data (Fig. 5c). Time course of RL production by P. aeruginosa 15GR under optimized conditions The time course profile, using a 50:50 mixture of sugarcane bagasse and sunflower seed meal, supplemented with impregnating solution containing 5% v/v glycerol using optimized conditions (temperature 30 °C, inoculum size 1% v/v and pH 8) was tested. As shown in Fig. 6, production increased rapidly during the first 6 days of incubation, then a slight increase was noticed at day 10. However, the RL concentration this time was higher (46.85 g/l-IS (93.7 g/kg-IDS)) than the original process (31.65 g/l-IS (63.3 g/kg-IDS)). Time course of the RL production by P. aeruginosa 15GR in SSF using optimized conditions. Values plotted are the means of triplicate results while error bars indicate the standard deviation of the data Recently, SSF has built up reliability in many industries and has evolved as an interesting substitute to SLF (Singhania et al. 2009). Severe foaming problems usually result from the production of biosurfactants in SLF and therefore, researches have suggested their production in SSF. Of these biosurfactants, RLs have been the most attractive for their production in SSF in recent years. The present work aimed at the optimization of the fermentation conditions for RL production in SSF. Twelve different solid substrates or combinations of solid substrates were screened for the production of RLs by the hyperproducing mutant 15GR using GMSM as the impregnating solution. Selection of an appropriate substrate is an important feature of SSF since it acts as both a source of nutrients and a physical support (Pandey 2003). As shown in the results, the highest RL level was obtained with the mixture of sugarcane bagasse and sunflower seed meal. Sugarcane bagasse, a porous residue obtained from cane stalks after the juice extraction from sugarcane (Soccol et al. 2010) consists chiefly of cellulose and hemicelluloses, lignin, nitrogenous compounds, and ash (Abdullah et al. 2006). 
Sunflower seed meal is obtained by grinding sunflower seeds which are rich in lipids, carbohydrates and proteins (Alberton et al. 2010). The high RL levels obtained using this mixture may be due to the fact that oils usually stimulate RL production, as reported in previous SLF studies (Benincasa and Accorsini 2008; Costa et al. 2006; Trummler et al. 2003). Another explanation may be that the mixture of substrates resulted in a substrate bed with properties superior to single substrates. Using sunflower seed meal alone caused the solid substrate to compact considerably probably due to its high lipid content. Sugarcane bagasse acts as a bulking agent, improving the substrate bed properties (Alberton et al. 2010). After screening for the best substrate, our first experiment was a kinetic study to find out the time required for maximum RL production. As shown in the results, a maximum RL concentration of 31.65 g/l-IS (63.3 g/kg-IDS) was obtained after 10 days of incubation. Therefore, results in subsequent experiments were obtained at this time. Upon comparing RL production in SSF with that resulting from SLF using the same impregnating solution as culture media, it was found that SSF (using sugarcane bagasse and sunflower seed meal) resulted in over a threefold increase in RL production, which further proves the superiority of this process. In an attempt to improve RL production, the effect of increasing the glycerol concentration in the impregnating solution was tested. As shown in the results, the highest RL yield (37.25 g/l-IS (74.5 g/Kg-IDS)) was observed with 5% v/v glycerol in the impregnating solution after 10 days incubation at 30 °C. To optimize the culture conditions required for maximum RL production using SSF, RSM, the most efficient and straight forward statistical approach that permits concurrent measurement of several process variables, was carried out (Chen et al. 2012). Box–Behnken experimental design was chosen to optimize 3 factors; inoculum size, temperature and pH. The Box–Behnken design (BBD) is a convenient approach to find out the effects of different factors and their interactions on the responses. It usually takes three levels of each factor and all the design points lie within the safe operating region. The advantages of BBD are that it is considered to be more efficient, more powerful, requires fewer experimental runs than other designs such as Central Composite Design and three-level full factorial design, and hence is cheaper (Marasini et al. 2012). ANOVA verifies the adequacy of the models and the P value is used as a tool to determine the significance of each of the studied factors. ANOVA results suggested that the model equation derived is significant and could adequately be used to describe the RL production by SSF (P value < 0.0001). Low coefficient of variation (CV) indicates that the experimental values were of adequate reliability. The CV reveals the precision level with which the treatments are compared, and the experiment reliability decreases as the CV value increases (Ghribi et al. 2012). Adequate (Adeq) Precision measures the signal to noise ratio, and a ratio more than 4 is commonly preferable (Abdel-Hafez et al. 2014). An adequate precision ratio of 17.487 in our study suggested an adequate signal and that the present model could be used to navigate the design space and could adequately be used to describe the RL production by SSF with P. aeruginosa 15GR. 
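As a quick numerical check of the statistics quoted above, the adjusted R² follows from R² through the standard correction for the number of model terms; with the 13-run design and the three-term linear model, R² = 0.9250 gives an adjusted value of about 0.90, as reported.

```python
# adj R2 = 1 - (1 - R2) * (n - 1) / (n - p - 1), with n runs and p model terms
def adjusted_r2(r2: float, n_runs: int, n_terms: int) -> float:
    return 1 - (1 - r2) * (n_runs - 1) / (n_runs - n_terms - 1)

print(round(adjusted_r2(0.9250, n_runs=13, n_terms=3), 2))   # 0.9
```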
The 3 D plots are plots that present details about the interaction between two factors and permit a simple prediction of the optimal conditions (Ghribi et al. 2012). From these plots and using numerical optimization function, optimum conditions for maximum RL production were found to be an inoculum size of 1%, temperature of 30 °C and pH of 8, resulting in a RL concentration of 46.85 g/l IS. The obtained model diagnostic plots also proved the validity of the model constructed in this study. ANOVA results also revealed that all three factors had a significant effect on RL concentration. A higher inoculum size usually enhances microbial growth and other associated microbial activities of the microorganism until a certain value after which there could be a decrease in microbial activity as a result of nutrient limitations (Kashyap et al. 2002). In the present study, optimum inoculum size was found to be 1%v/v. This may be due to the severe competition among bacteria when inoculum size was increased, leading to change in metabolism towards a survival pattern. Alternatively, this may be because an increase in the initial inoculum size stimulated an earlier initiation of RL production instead of increasing cell concentration, which resulted in lower final biomass and hence lower final RL concentrations. Another critical factor affecting RL production is temperature. Wei et al. (2005) measured RL production and showed that 30 °C to 37 °C was the optimum temperature range. In this study, RL production reached a maximum at 30 °C. The pH also greatly influences many microbial metabolites production. Most of the previous studies reported that a pH range from 6 to 7 resulted in maximum RL production in different Pseudomonas species, depending on the strain used (Chen et al. 2007; Zhu et al. 2012). Moreover, Mulligan et al. (2014) reported that P. aeruginosa does not produce RLs at a pH higher than 7.5 and that a pH of 6.2 was optimum for RL production. Another study also showed that RL concentration decreased and reached its lowest point at a pH of 8 (Moussa et al. 2014). In contrast to these reports, in this study maximum RL production reached a maximum at an initial pH of 8. This result is in agreement with our previous study carried out on the parent isolate P6, where optimum pH for maximum RL production was also found to be 7.5 (slightly alkaline) (El-Housseiny et al. 2016). This suggests that optimum pH for maximum RL production is bacterial strain dependant and that the bacterial isolate used in this study was highly sensitive to pH for RL production. The commercial application of this powerful biosurfactant may thus be enhanced by reducing its production costs through increasing its yield using RSM. Since major changes have been made in the fermentation conditions for RL production, the time course profile was repeated, using the optimized conditions reached (a 50:50 mixture of sugarcane bagasse and sunflower seed meal, an impregnating solution of 20 ml containing 5%(v/v) glycerol, inoculum size 1%v/v, pH 8 and incubation temperature of 30 °C). Again, maximum RL production was obtained after 10 days of incubation, however, the RL level this time was about 1.5 fold higher than results obtained in the previous time course study, reaching 46.85 g/l-IS. The obtained value was also about 5.5 folds higher than that obtained using SLF carried out in this study. Moreover, this RL level was obtained in a shorter time than most studies reported so far. 
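The yield comparisons in the next paragraph are reported in two units, and the relation between them depends on how much impregnating solution is added per mass of dry solids. A minimal sketch of the conversion follows; the 20 ml per 10 g ratio is the one used in this work, whereas applying the same 10 g assumption to the 37.5 ml used by Camilios-Neto et al. (2008) is an inference that is consistent with the figures quoted below.

```python
# g/kg IDS = (g/l IS) x (impregnating-solution volume, l) / (initial dry solids, kg)
def g_per_kg_ids(g_per_l_is: float, is_volume_ml: float, dry_solids_g: float) -> float:
    return g_per_l_is * (is_volume_ml / 1000.0) / (dry_solids_g / 1000.0)

# This work: 20 ml of impregnating solution per 10 g of solids -> factor of 2
print(round(g_per_kg_ids(46.85, is_volume_ml=20.0, dry_solids_g=10.0), 1))  # 93.7 g/kg IDS
# Camilios-Neto et al. (2008): 37.5 ml per (assumed) 10 g -> factor of 3.75
print(round(g_per_kg_ids(46.0, is_volume_ml=37.5, dry_solids_g=10.0), 1))   # 172.5 ~ 172 g/kg IDS
```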
In 2008, a maximum RL production of 46 g/l-IS (172 g/kg-IDS) by P. aeruginosa UFPEDA614 in SSF using a 50:50 mixture of sugarcane bagasse and sunflower seed meal with 37.5 ml of impregnating solution containing 10% v/v glycerol after 12 days was reported (Camilios-Neto et al. 2008). Although our RL yield is comparable with this value when expressed in terms of g/l-IS, our yield expressed in terms of g/kg of solid substrate showed lower values. This may be explained by the smaller volume of impregnating solution used in our study. Moreover, in 2011, the highest RL production of 45 g/l-IS was obtained by the same strain after 12 days of SSF using sugarcane bagasse and corn bran (1:1), and an impregnating solution of 35 ml containing 6% (v/v) of each of glycerol and soybean oil (Camilios-Neto et al. 2011). Therefore, in our study, the mutant 15GR yielded comparable RL concentrations that were reached in less time and using lower concentrations of carbon source than both these studies. Moreover, although maximum RL concentrations were obtained after 10 days of incubation, high RL levels were already achieved (41.87 g/l-IS (83.74 g/kg-IDS)) by the mutant 15GR after only 6 days of incubation, unlike Camilios-Neto et al. (2008), whose RL yields reached only about 24 g/l-IS (89 g/kg-IDS) after 6 days of incubation. In addition, in 2017, maximum RL yields of 18.7 g/l was obtained using glycerol as carbon source and rapeseed/wheat bran as matrix (Wu et al. 2017). Our study hence presents an appropriate basis for subsequent studies on RL production using SSF. In conclusion, these results showed that the application of BBD and RSM were successful in enhancing the RL production under SSF by 67% and a maximum RL concentration of 46.85 g/l-IS was obtained in the present study using a mixture of sugarcane bagasse and sunflower seed meal after only 10 days of incubation. Optimum fermentation conditions were found to be an inoculum size of 1%v/v, a temperature of 30 °C and a pH of 8. These results suggest that SSF may possibly be a feasible substitute for SLF to produce RLs since our maximum yield was comparable with values that have been obtained in SLF: 23.6 g/l (Noh et al. 2014), 32 g/l (Matsufuji et al. 1997), 36.7 g/l (Muller et al. 2011), and 46 g/l (Linhardt et al. 1989). These findings imply that RL production by P. aeruginosa 15GR in static tray bioreactors may be successful (Durand 2003). Additional concern to the application of SSF for RL production is hence justified for enhancing RL levels in laboratory-scale work even further and for moving up to pilot scale. Abdel-Hafez SM, Hathout RM, Sammour OA (2014) Towards better modeling of chitosan nanoparticles production: screening different factors and comparing two experimental designs. Int J Biol Macromol 64:334–340. https://doi.org/10.1016/j.ijbiomac.2013.11.041 Abdel-Mawgoud AM, Aboulwafa MM, Hassouna NA (2009) Characterization of rhamnolipid produced by Pseudomonas aeruginosa isolate Bs20. Appl Biochem Biotechnol 157(2):329–345. https://doi.org/10.1007/s12010-008-8285-1 Abdullah N, Ejaz N, Abdullah M, Alim-Un-Nisa Firdous S (2006) Lignocellulosic degradation in solid-state fermentation of sugar cane bagasse by Termitomyces sp. Micología Aplicada Int 18:15–19 Alberton D, Mitchell DA, Cordova J, Peralta-Zamora P, Krieger N (2010) Production of a fermented solid containing lipases of Rhizopus microsporus and its application in the pre-hydrolysis of a high-fat dairy waste water. Food Technol Biotechnol 48(1):28–35. 
https://www.ftb.com.hr/images/pdfarticles/2010/January-March/48-28.pdf Benincasa M, Accorsini FBR (2008) Pseudomonas aeruginosa LBI production as an integrated process using the wastes from sunflower-oil refining as a substrate. Bioresour Technol 99:3843–3849. https://doi.org/10.1016/j.biortech.2007.06.048 Bodour AA, Drees KP, Maier RM (2003) Distribution of biosurfactant-producing bacteria in undisturbed and contaminated arid Southwestern soils. Appl Environ Microbiol 69:3280–3287. https://doi.org/10.1128/AEM.69.6.3280-3287.2003 Camilios-Neto D, Meira JA, de Araújo JM, Mitchell DA, Krieger N (2008) Optimization of the production of rhamnolipids by Pseudomonas aeruginosa UFPEDA 614 in solid-state culture. Appl Microbiol Biotechnol 81:441–448. https://doi.org/10.1007/s00253-008-1663-3 Camilios-Neto D, Bugay C, de Santana-Filho AP, Joslin T, de Souza LM, Sassaki GL, Mitchell DA, Krieger N (2011) Production of rhamnolipids in solid-state cultivation using a mixture of sugarcane bagasse and corn bran supplemented with glycerol and soybean oil. Appl Microbiol Biotechnol 89:1395–1403. https://doi.org/10.1007/s00253-010-2987-3 Chandrasekaran EV, BeMiller JN (1980) Constituent Analysis of Glycosaminoglycans. In: Whistler RL (ed) Methods in carbohydrate chemistry. Academic Press, New York, pp 89–96 Chen SY, Lu WB, Wei YH, Chen WM, Chang JS (2007) Improved production of biosurfactant with newly isolated Pseudomonas aeruginosa S2. Biotechnol Prog 23:661–666. https://doi.org/10.1021/bp0700152 Chen J, Huang PT, Zhang KY, Ding FR (2012) Isolation of biosurfactant producers, optimization and properties of biosurfactant produced by Acinetobacter sp. from petroleum-contaminated soil. J Appl Microbiol 112:660–671. https://doi.org/10.1111/j.1365-2672.2012.05242.x Costa SGVAO, Nitschke M, Haddad R, Eberlin MN, Contiero J (2006) Production of Pseudomonas aeruginosa LBI rhamnolipids following growth on Brazilian native oils. Process Biochem 41:483–488. https://doi.org/10.1016/j.procbio.2005.07.002 Daoshan L, Shouliang L, Liu Y, Demin W (2004) The effect of biosurfactant on the interfacial tension and adsorption loss of surfactant in ASP flooding. Colloids Surf A Physicochem Eng Asp. 244:53–60 Durand A (2003) Bioreactor designs for solid state fermentation. Biochem Eng J 13:113–125 El-Housseiny GS, Aboulwafa MM, Aboshanab KA, Hassouna NA (2016) Optimization of rhamnolipid production by P. aeruginosa isolate P6. J Surfactants Deterg 19(5):943–955. https://doi.org/10.1007/s11743-016-1845-4 El-Housseiny GS, Aboshanab KA, Aboulwafa MM, Hassouna NA (2017) Isolation, screening and improvement of rhamnolipid production by Pseudomonas isolates. Indian J Biotechnol 16(4):611–619 Ghribi D, Abdelkefi-Mesrati L, Mnif I, Kammoun R, Ayadi I, Saadaoui I, Maktouf S, Chaabouni-Ellouze S (2012) Investigation of antimicrobial activity and statistical optimization of Bacillus subtilis SPB1 biosurfactant production in solid-state fermentation. J Biomed Biotechnol. https://doi.org/10.1155/2012/373682 Kashyap P, Sabu A, Pandey A, Szakacs G, Soccol CR (2002) Extra-cellular l-glutaminase production by Zygosaccharomyces rouxii under solid-state fermentation. Process Biochem 38:307–312. https://doi.org/10.1016/S0032-9592(02)00060-2 Koch AK, Käppeli O, Fiechter A, Reiser J (1991) Hydrocarbon assimilation and biosurfactant production in Pseudomonas aeruginosa mutants. J Bacteriol 173:4212–4219 Kosaric N, Vardar-Sukan F (2014) Biosurfactants: production and utilization-processes, technologies, and economics. 
CRC Press, Boca Raton Linhardt RJ, Bakhit R, Daniels L, Mayerl F, Pickenhagen W (1989) Microbially produced rhamnolipid as a source of rhamnose. Biotechnol Bioeng 33:365–368. https://doi.org/10.1002/bit.260330316 Marasini N, Yan YD, Poudel BK, Choi HG, Yong CS, Kim JO (2012) Development and optimization of self-nanoemulsifying drug delivery system with enhanced bioavailability by Box–Behnken design and desirability function. J Pharm Sci 101:4584–4596. https://doi.org/10.1002/jps.23333 Matsufuji M, Nakata K, Yoshimoto A (1997) High production of rhamnolipids by Pseudomonas aeruginosa growing on ethanol. Biotechnol Lett 19:1213–1215. https://doi.org/10.1023/A:1018489905076 Moussa TAA, Mohamed MS, Samak N (2014) Production and characterization of di-rhamnolipid produced by Pseudomonas aeruginosa TMN. Braz J Chem Eng 31:867–880. https://doi.org/10.1590/0104-6632.20140314s00002473 Muller MM, Hörmann B, Kugel M, Syldatk C, Hausmann R (2011) Evaluation of rhamnolipid production capacity of Pseudomonas aeruginosa PAO1 in comparison to the rhamnolipid over-producer strains DSM 7108 and DSM 2874. Appl Microbiol Biotechnol 89:585–592. https://doi.org/10.1007/s00253-010-2901-z Mulligan CN, Sharma SK, Mudhoo A (2014) Biosurfactants: research trends and applications. Taylor & Francis, New York Mussatto SI, Ballesteros LF, Martins S, Teixeira JA (2012) Use of agro-industrial wastes in solid-state fermentation processes. In: Show PK-Y (ed). Industrial waste, In Tech, pp 121–140. https://doi.org/10.5772/36310 Narendrakumar F, Saikrishna NMD, Prakash P, Preethi TV (2017) Production of rhamnolipids from P. aeruginosa by SSF method. Int J Green Pharm. https://doi.org/10.22377/ijgp.v11i02.920 Noh NA, Salleh SM, Yahya AR (2014) Enhanced rhamnolipid production by Pseudomonas aeruginosa USM-AR2 via fed-batch cultivation based on maximum substrate uptake rate. Lett Appl Microbiol 58:617–623. https://doi.org/10.1111/lam.12236 Pandey A (2003) Solid-state fermentation. Biochem Eng J 13:81–84. https://doi.org/10.1016/S1369-703X(02),00121-3 Singhania RR, Patel AK, Soccol CR, Pandey A (2009) Recent advances in solid-state fermentation. Biochem Eng J 44:13–18. https://doi.org/10.1016/j.bej.2008.10.019 Soccol CR, Vandenberghe LP, Medeiros AB, Karp SG, Buckeridge M, Ramos LP, Pitarelo AP, Ferreira-Leitão V, Gottschalk LM, Ferrara MA, da Silva Bon EP, de Moraes LM, Araújo Jde A, Torres FA (2010) Bioethanol from lignocelluloses: status and perspectives in Brazil. Bioresour Technol 101:4820–4825. Sodagari M, Ju L-K (2014) Cells were a more important foaming factor than free rhamnolipids in fermentation of Pseudomonas aeruginosa E03-40 for high rhamnolipid production. J Surfactants Deterg 17:573–582. https://doi.org/10.1007/s11743-013-1535-4 Thomas L, Larroche C, Pandey A (2013) Current developments in solid-state fermentation. Biochem Eng J 81:146–161 Trummler K, Effenberger F, Syldatk C (2003) An integrated microbial/enzymatic process for production of rhamnolipids and L-(+)-rhamnose from rapeseed oil with Pseudomonas sp. DSM 2874. Eur J Lipid Sci Technol 105:563–571. https://doi.org/10.1002/ejlt.200300816 Wei YH, Chou Chien-Liang, Chang Jo-Shu (2005) Rhamnolipid production by indigenous Pseudomonas aeruginosa J4 originating from petrochemical wastewater. Biochem Eng J 27:146–154. https://doi.org/10.1016/j.bej.2005.08.028 Winterburn JB, Martin PJ (2012) Foam mitigation and exploitation in biosurfactant production. Biotechnol Lett 34:187–195. 
https://doi.org/10.1007/s10529-011-0782-6 Wu J, Ju L-K (1998) Extracellular particles of polymeric material formed in n-hexadecane fermentation by Pseudomonas aeruginosa. J Biotechnol 59:193–202. https://doi.org/10.1016/S0168-1656(97)00150-8 Wu J, Zhang J, Wang P, Zhu L, Gao M, Zheng Z, Zhan X (2017) Production of rhamnolipids by semi-solid-state fermentation with Pseudomonas aeruginosa RG18 for heavy metal desorption. Bioprocess Biosyst Eng 40:1611. https://doi.org/10.1007/s00449-017-1817-8 Stat-Ease Handbook for experimenters: version 11.0 (https://www.statease.com/pubs/handbk_for_exp_sv.pdf) Zhu L, Yang X, Xue C, Chen Y, Qu L, Lu W (2012) Enhanced rhamnolipids production by Pseudomonas aeruginosa based on a pH stage-controlled fed-batch fermentation process. Bioresour Technol 117:208–213. https://doi.org/10.1016/j.biortech.2012.04.091
GSE performed the practical experiments incorporated in the manuscript under the supervision and guidance of KMA, MMA, and NAH. KMA and MMA designed the protocol of this study. GSE and KMA wrote the first draft of the manuscript. KMA, MMA and NAH helped in writing and revising this manuscript. All authors read and approved the final manuscript.
We hereby acknowledge the Department of Microbiology and Immunology, Faculty of Pharmacy, Ain Shams University for providing us with all facilities and support required to perform the practical work. We would like to thank Dr. Rania M. Hathout, Assistant Professor of Pharmaceutics and Industrial Pharmacy, Faculty of Pharmacy, Ain Shams University, for her help in using the Design Expert software and Dr. Ahmad M. Abdel-Mawgoud, Lecturer of Microbiology and Immunology, Faculty of Pharmacy, Ain Shams University, for his valuable advice throughout the work.
All data generated or analyzed during this study are included in this published article in the main manuscript and additional supporting file.
Consent to publish
No funding source was received. The article is self-funded by the authors. All authors shared in the design of the study, collection, analysis, and interpretation of data and in writing the manuscript.
Department of Microbiology and Immunology, Faculty of Pharmacy, Ain Shams University, Organization of African Unity St., Abbassia, POB: 11566, Cairo, Egypt
Ghadir S. El-Housseiny, Khaled M. Aboshanab, Mohammad M. Aboulwafa & Nadia A. Hassouna
Correspondence to Khaled M. Aboshanab.
El-Housseiny, G.S., Aboshanab, K.M., Aboulwafa, M.M. et al. Rhamnolipid production by a gamma ray-induced Pseudomonas aeruginosa mutant under solid state fermentation. AMB Expr 9, 7 (2019). https://doi.org/10.1186/s13568-018-0732-y
Keywords: Response surface methodology; Rhamnolipids; Solid state fermentation
A proposal for bio-synchronized transmission of EEG/ERP data M. A. Lopez-Gordo1, P. Padilla1 & F. Pelayo Valle2 Acquisition of event-related potentials (ERPs) requires a nearly perfect synchronization between the stimulus player and the EEG acquisition unit that clinical systems implement at hardware level by means of a wired link. Out of clinical context, current brain-computer interface technology offers wireless and wearable EEG headsets that provide ubiquitous EEG acquisition. However, they are not adequate for ERPs acquisition since they lack the physical wire with the stimulus player. In this paper, we propose a novel technique devoted to provide a solution to this problem by means of a bio-synchronization approach. This technique adds to the stimulus data, a tagged audio preamble for synchronization (TAP-S) that embeds a synchronization mechanism in its physiological response based on pseudo random sequences. In this way, the EEG data elicited by the preamble and the stimulus are recorded together and the stimulus onset can be directly extracted from the EEG data by preamble detection. TAP-S is tailored to work with any low-cost multimedia player and wireless EEG headsets. Our preliminary results reveal TAP-S as a first, promising, and low-cost approach that, after further improvement, could enable remote processing of ERPs with wireless acquisition with application in telemedicine, ambient assisted living, or brain-computer interfaces. EEG is a measure of the brain activity caused by populations of neurons that synchronously discharge their action potentials. An event-related potential (ERP) is the electrophysiological response evoked by an external stimulus that appears as positive or negative deflections of the EEG amplitude. In practical terms, ERPs are characterized by the amplitude and the latency with temporal reference to the stimulus onset. They are denoted by a letter P (positive) or N (negative) followed by a number that denotes the latency time in milliseconds since stimulus onset (e.g., P75, N100, and P200). ERP analysis is extensively used in clinical practice because their latencies and amplitudes correlate with well-known cognitive or physiological pathologies or functional impairments (e.g., long list with more than 20 visual abnormalities that includes optic neuropathy, multiple sclerosis, brain injury, and glaucoma [1, 2]). That means that an accurate estimation of the latencies and amplitudes is needed for diagnosis based on ERP analysis. The latter justifies the need of precise synchrony between the stimulus display and the EEG acquisition unit. In clinical EEG systems, synchrony is typically implemented by means of a physical wire. Typically, the stimulation system is provided with a parallel port and specific software to run psycho-physiologic paradigms (e.g., e-Prime and Curry 7). In each trial, the software presents stimuli to the subject under test and immediately after sends onset marks out of a com port. These marks are received by the EEG acquisition unit though the hardware port (see Fig. 1). The onset mark is used to establish the beginning of the stimulus in a trial, thus setting the temporal reference to measure ERP latencies. The need of the wired link severely limits two fundamental aspects: (i) remote EEG/ERP acquisition and (ii) wireless EEG headsets. We elaborate on these two aspects in the next two paragraphs. Typical visual ERP acquisition setup composed of stimulus system and EEG acquisition system. 
Each time the stimulus system presents a stimulus, an onset mark is transmitted out from the parallel port to the EEG acquisition system. The onset mark demarcates the beginning of each trial Telemonitoring is a key aspect for current e-health systems (see some current studies in [3–5]). Regarding remote EEG/ERP acquisition and testing, we merely mention some studies and cases of mobile and home-based EEG services. This service is gaining adepts among clinicians and users. As a matter of fact, approximately 30 % of Dutch neurologists use EEG recordings for clinical diagnosis [6]. For instance, in [7], a telemedicine solution for remote video-EEG consultation was tried in La Rioja (Spain). Almost 99 % of patients expressed a high degree of satisfaction with the service. The authors of the paper concluded that users preferred the telemedicine service to in-hospital EEG test because it constitutes an improvement in access to this specialized medical care as well as important financial and time savings. The authors of [8] proposed a home-based polysomnography system as a cost-effective alternative for obstructive sleep apnea diagnosis. The system was equipped with a WiFi/3G interface for data and video communication via Skype. The authors of [6] compared the performance of four mobile EEG systems in recording epileptiform discharges. They concluded that most of the patients were satisfied with the service. There are more examples of home-based EEG [9–11] and medical services in general [12, 13]. However, these and other examples of home-based EEG and bio-signal testing do not apply to ERPs because of the nearly perfect synchrony needed between the stimulus player and EEG acquisition unit. Instead they register other long-term and non-related-to-events signals such as epileptic seizures or apnea episodes. In respect to wireless EEG acquisition, we mention some studies and works related to brain-computer interfaces (BCIs) [14, 15]. In the last years, the technological evolution of BCIs has caused an increase of ambulatory, mobile, and wireless EEG headsets for clinic, entertainment, ambient assisted living, and other personal uses and applications [16–19]. The latest advance consists in low-cost, wearable and dry EEG headsets with wireless transmission (see [20, 21] for reviews) that allows mobile EEG services. These devices offer synchronization by means of proprietary software and protocols but not between the stimulus player and the wireless EEG headset. They are standalone solutions that do not integrate a stimulus player. Then, they are not meant to execute ERP paradigms but are suitable for others in which stimulation is not needed (e.g., self-regulation of low frequency cerebral rhythms [22], alpha band modulation [23] or steady-state EEG responses [24]). Conversely, other wireless EEG designs combine both stimulus player and EEG headset with synchronization [25, 26]. In [22], the latter attempt, specific clinical hardware for stimulation with a specific SYN port was used (PS33-PLUS, by Grass Technologies); although in terms of usability, we would expect wireless EEG headsets to permit users to use their own stimulus players without addition of extra synchronization hardware. In summary, event-related paradigms are limited to clinical acquisition due to restrictions in synchrony. In this paper, we propose a technique, in the field of novel bio-inspired and bio-collaborative approaches [27, 28], aimed to address this limitation. 
Tagged audio preamble for synchronization (TAP-S) adds to the stimulation data a tagged audio preamble that embeds in its physiological response a synchronization mechanism based on pseudo random sequences. Our preliminary results reveal TAP-S as the first, promising, and low-cost approach that could enable recordings of ERPs with wireless EEG headsets and remote processing with application in telemedicine, ambient assisted living, or brain-computer interfaces. TAP-S overview The synchronization issue in ERP acquisition In medical diagnosis, ERP estimation is typically performed by synchronous trials averaging. Paradigms normally average tens or hundreds of trials to obtain high quality ERPs (e.g., a minimum of 64 and 128 are recommend for visual evoked potentials [29]). Conversely, if the stimulus onset could not be accurately determined (asynchronous averaging), this would give rise to inaccurate latencies and lower amplitudes. Figure 2 illustrates this effect. Upper plot shows a typical EEG trial in which target ERPs, namely N100 and P200, are present. Bottom plot simulates ERPs denoising after synchronous and asynchronous average of 100 trials. The smooth, black, and thick line shows the two target ERPs. The thin and dotted lines are the results of synchronous and asynchronous averaging, respectively. In the asynchronous average, the onset error was uniformly distributed in the range [0–60 ms]. Longer latencies and lower amplitudes in comparison with the synchronous denoising can be observed for N100 and P200 The main idea behind TAP-S is to provide an accurate synchronization mechanism between the stimulus player and wireless EEG headset without the need of a physical wire. This is a key aspect to develop innovative mobile EEG services. We just mention some examples in what follows. Interactive services: Fig. 3 shows messages flow in a mobile EEG service. In the scenario of Fig. 3, the stimulation server stores the stimulus content. In an online application, it streams both a preamble and stimulation down to the stimulation player. The stimulation player buffers or reproduces it online, thus evoking the corresponding brain response on the user. A wireless EEG headset acquires this brain response and uploads the EEG data to the monitoring and processing server. In an online application, this server analyzes the brain response and immediately takes a decision. This decision modifies the next stimulation to be streamed down to the subject, thus closing the interactive loop. Bio-synchronized transmission of wireless EEG/ERP data. In this scenario, the stimulation player presents the stimulus headed by an auditory preamble used in bio-synchronization. The bio-synchronized EEG data is transmitted to the monitoring and processing server. After accurate extraction of the stimulus onset from the EEG data, the server can perform synchronous averaging of trials This type of interactive service could permit online cognitive telerehabilitation, such as neuro-feedback applied to a multitude of pathologies (e.g., attention deficit hyperactivity disorder [30, 31], autism spectrum [32, 33], cerebral palsy [34], and mental impairment [35]). Notice that IP protocols offer synchronization mechanisms in all links and interfaces of Fig. 3 except in the interface between the stimulation player and the wireless EEG headset. In this link, TAP-S provides it. 
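The averaging effect sketched in Fig. 2 can be reproduced with a short simulation. The following is a minimal sketch, assuming synthetic N100/P200-like deflections, additive background noise and a uniformly distributed onset error of 0–60 ms for the asynchronous case, as in the figure; it is not the code used to generate Fig. 2.

```python
# Minimal simulation of synchronous vs. asynchronous trial averaging
# (synthetic N100/P200-like deflections; onset error uniform in [0, 60] ms).
import numpy as np

fs = 1000                                        # sampling rate, Hz
t = np.arange(0, 0.5, 1 / fs)                    # 500 ms epoch
erp = (-1.0 * np.exp(-((t - 0.100) / 0.020) ** 2)    # N100-like trough
       + 1.2 * np.exp(-((t - 0.200) / 0.030) ** 2))  # P200-like peak

rng = np.random.default_rng(0)
n_trials = 100
sync_avg = np.zeros_like(t)
async_avg = np.zeros_like(t)
for _ in range(n_trials):
    trial = erp + rng.normal(0, 2.0, t.size)     # single noisy trial
    sync_avg += trial                            # perfectly aligned
    async_avg += np.roll(trial, rng.integers(0, 61))   # onset error, 0-60 ms
sync_avg /= n_trials
async_avg /= n_trials

for name, avg in (("synchronous", sync_avg), ("asynchronous", async_avg)):
    print(f"{name:>12}: N100 latency {t[np.argmin(avg)] * 1e3:.0f} ms, "
          f"P200 amplitude {avg.max():.2f}")
```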
Ubiquitous execution of event-related paradigms: since the synchronization provided by TAP-S is embedded in the EEG data, both the online and the offline execution of event-related paradigms could be performed. As an example, Fig. 4 shows a scenario of use: a volunteer (for instance, a female participant) in a clinical study of vision downloads her video. It is stored in her mobile device or her stimulus player. Then, during her commute, she plays it. The wireless EEG headset acquires the brain responses and stores the EEG data. Then, it uploads them to a remote monitoring and processing server when a broadband connection is available (for instance, at the office). This server detects the stimulus onset of each trial by means of TAP-S and performs ERP analysis.
Ubiquitous execution of event-related paradigms. In this scenario, participants in a study of vision assessment download a video at home and store it in their mobile devices. Then, during their daily commuting, they play it. The wireless EEG headsets store the EEG data and transmit them to the monitoring and processing server at work. This server detects the stimulus onset of each trial by means of TAP-S and performs asynchronous averaging of ERPs.
TAP-S is implemented by encapsulating the stimulation data with a synchronization preamble (see Fig. 5).
TAP-S encapsulates the multi-modal stimulus data with a header containing the auditory synchronization preamble.
Physical principles
TAP-S is based on the reliable evocation of auditory middle-latency responses, precisely the well-known 40-Hz phenomenon [36, 37]. We explain it as follows. Tone-pips are commonly used in clinical audiology [38, 39]. They are auditory pulses containing pure tones of a few milliseconds' duration with rising and falling flanks. When we stimulate someone with a single auditory tone-pip, it evokes potentials that resemble 3 or 4 cycles of a 40-Hz sine wave. Consequently, when a train of auditory clicks is presented at a rate of 40 stimuli per second, these waves combine to form a sinusoidal wave of 40 Hz (see Fig. 6). There are two important reasons that justify the use of the 40-Hz ERP: (i) this potential is a reliable and time-locked response to the stimulus onset; (ii) the energy is concentrated around the 40-Hz spectral line (i.e., most of the power spectral density is within a very short spectral range). These two aspects are very important for a robust and reliable detection.
The auditory 40-Hz phenomenon. The figure shows four auditory middle-latency responses to four tone-pips presented at a rate of 40 Hz (25 ms time interval). On the bottom, the overlapped response represents the coherent sum (i.e., in phase) of the four responses. The figure is adapted from [36].
Preamble design
In specialized literature, middle-latency auditory potentials, such as the 40-Hz phenomenon [40–42], have been evoked by means of tone-pips. Equation 1 shows an analytic expression of a tone-pip:
$$ g(t)=\begin{cases} \dfrac{t}{t_1}\,\sin(2\pi f_c t) & 0\le t\le t_1 \\ \sin(2\pi f_c t) & t_1\le t< t_2 \\ \dfrac{t-t_3}{t_2-t_3}\,\sin(2\pi f_c t) & t_2\le t\le t_3 \end{cases} \qquad (1) $$
where g(t) is the tone-pip signal, t1 and t2 delimit the rising and falling times, fc is the pure-tone frequency, and t3 is the total length (see Section 3 for the specific values used in this study). We also obtained m(t), a pseudo random binary sequence of maximum length (MLS or m-seq) presented at a rate of 40 Hz (i.e., 25 ms time interval):
$$ m(t)=\sum_{m=0}^{M-1} a_m\,\delta(t-0.025m) \qquad (2) $$
where a_m is the m-seq and M is the length of the sequence. Pseudo random codes have been extensively used for synchronization purposes in navigation/positioning [43, 44], radar [45], and wireless communications [46, 47], and for other purposes related to clinical practice [48–50]. In the case of the m-seq, its autocorrelation function cannot be bettered by any other family of pseudo random codes. Finally, we convolved g(t) and m(t), thus giving rise to an auditory signal designed to evoke the auditory 40-Hz phenomenon while at the same time permitting an accurate detection based on its autocorrelation properties:
$$ p_{40\,\mathrm{Hz}}(t)=m(t)\otimes g(t) \qquad (3) $$
where p40Hz(t) is the signal used as synchronization preamble. We detect the preamble by means of a replica-correlator detector (equivalent to a matched filter detector [45]). There are two ways to build the replica signal, either by averaging many ERPs during a previous calibration session or synthetically by software. In this study, we chose the second option, thus avoiding calibration sessions and testing the approach in a plug-and-play fashion. The replica signal was the expected electrophysiological response to the stimulus signal p40Hz(t) and was synthetically built as the convolution of m(t) with 3 cycles of a 40-Hz sinusoidal signal. We used receiver operating characteristic (ROC) curves [51] and the area under the curve (AUC) to assess the discrimination capacity of TAP-S. In addition, we estimated the best operating point (BOP).
Encapsulation process
The encapsulation and decapsulation processes occur at the stimulation and the monitoring and processing servers, respectively (see Fig. 7). The processes are completely transparent for the stimulus player and the wireless EEG headset. The stimulus player reproduces the whole message (preamble and stimulation data), and the preamble evokes the auditory 40-Hz phenomenon. Then, the wireless EEG headset uploads to the monitoring and processing server EEG data containing the brain responses to both preamble and stimulus. The monitoring and processing server detects the synchronization preamble and removes it from the EEG data, thus obtaining the stimulus response. The only thing that the monitoring and processing server needs in order to detect the preamble is to build the replica signal (see previous subsection). In turn, the replica signal is built from only three parameters of the pseudo random sequence: (i) number of taps, (ii) cyclical shift, and (iii) initiation seed.
The encapsulation process. Encapsulation and decapsulation happen only at both ends, that is, in the stimulation and the monitoring and processing servers. It is transparent for both the stimulus player and the wireless EEG headset.
Operation restrictions
TAP-S has only one restriction for an optimal synchronization. The media player must reproduce the stimulus data stream, including the preamble, in continuous mode.
Buffering or playback is permitted, as well as online or offline upload of the EEG data. The only restriction is that both preamble and stimulus must be reproduced together without gaps, that is, without interruptions once the stimulation is started. Uninterrupted reproduction is mandatory because the synchronization process requires a replica signal that matches the response to the signal p 40hz(t). In this preliminary study, we pursue (i) the assessment of preliminary results of TAP-S as the first bionic synchronization mechanism of data transmission and (ii) the analysis of its practical utility in medical diagnosis. In this experiment, we used a clinical EEG system in an isolated lab with a wired link between the stimulation and the acquisition units (similar to Fig. 1). The onset marks obtained with this configuration was considered the gold standard. Then, we tried the blind detection of the onset marks by means of TAP-S. Two healthy volunteers participated (both males, 31 and 41 years old). The experiment was undertaken in a room isolated from external disturbances. The study was meant to be completely auditory, so blind volunteers could participate if necessary (see [52] for a discussion about the controversy in auditory vs. visual BCIs). Then, participants were told to close their eyes during the whole experiment, thus facilitating concentration and avoiding muscular EEG artifacts (e.g., eyes blinking or involuntary gazing). The auditory stimulation was manually adjusted to the comfort level of each participant. They were informed of all aspects of the study and signed the informed consent. The experiment consisted of 21 trials with the same structure. First, an allocution indicated trial start after key pressing. Once the participant pressed a key, a beep announced the beginning of the trial. After the beep, the preamble was presented to the participant by means of earphones. Finally, a beep sounded and an allocution signaled the end of the trial and the preparation for the next one. The duration of each trial was around 6 s, and the inter-trial resting time was up to the participant (typically 15–25 s). The goal was to avoid cumulative effects along the trials (e.g., fatigue and lack of focus on the experiment). For the sake of usability, we configured a single active channel placed on the vertex (Cz) and referenced to the mean value of the ear lobes (see Fig. 8). These positions of the International 10-20 system [53] were chosen because they have been included in reports of successful studies of auditory event-related potentials [54–57] and because various commercial wireless EEG headsets have an electrode on or close to this position. The ground electrode was placed between the Fpz and the Fz positions. The recordings were acquired on a Synamps2 by Compumedics Neuroscan, were band-pass filtered between 1 and 100 Hz, and were sampled at a rate of 1 kHz. Top view of the International 10-20 system for electrode placement. The positions of electrodes used in this study have a gray background Preamble header The preamble was built by means of an m-seq of 8 taps (length = 28 − 1 = 255). We constructed m(t) by spacing out 25 ms each of the symbols of the m-seq. Then, m(t) was convolved with a tone-pip as described in (1) with f c equals 1 kHz and t 1 , t 2 , and t 3 equal 1, 4, and 5 ms, respectively, thus obtaining p 40hz(t), which constitutes the preamble header (see Fig. 9). The total length of the header was 6.375 s. 
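Putting the parameters of this subsection together, the following minimal sketch generates the preamble (an 8-tap, 255-symbol m-sequence at 25 ms spacing convolved with a 1-kHz tone-pip of 1/4/5 ms rise/plateau/total time) and the synthetic replica used later for detection (the same sequence convolved with 3 cycles of a 40-Hz sine). The LFSR feedback taps, the seed and the audio sampling rate are illustrative assumptions; the study only specifies an 8-tap maximum-length sequence.

```python
# Sketch of preamble and replica synthesis (assumed LFSR taps and seed; the
# paper only fixes the register length at 8, i.e. a 255-symbol m-sequence).
import numpy as np

FS = 44100          # audio sampling rate for the preamble (assumption)
FS_EEG = 1000       # EEG sampling rate used for the replica (as in the study)

def m_sequence(taps=(8, 6, 5, 4), seed=1, length=255):
    """Binary maximum-length sequence from an 8-bit Fibonacci LFSR."""
    state = [(seed >> i) & 1 for i in range(8)]
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for tp in taps:
            fb ^= state[tp - 1]
        state = [fb] + state[:-1]
    return np.array(out)

def tone_pip(fc=1000, t1=0.001, t2=0.004, t3=0.005, fs=FS):
    """1 kHz tone-pip with linear rising and falling flanks (Eq. 1)."""
    t = np.arange(0, t3, 1 / fs)
    env = np.minimum(np.clip(t / t1, 0, 1), np.clip((t - t3) / (t2 - t3), 0, 1))
    return env * np.sin(2 * np.pi * fc * t)

def preamble(seq, fs=FS, isi=0.025):
    """Eq. 3: the m-sequence (25 ms symbol spacing) convolved with a tone-pip."""
    impulses = np.zeros(int(len(seq) * isi * fs))
    impulses[(np.arange(len(seq)) * isi * fs).astype(int)] = seq
    return np.convolve(impulses, tone_pip(fs=fs))

def replica(seq, fs=FS_EEG, isi=0.025):
    """Expected EEG response: the m-sequence convolved with 3 cycles of 40 Hz."""
    impulses = np.zeros(int(len(seq) * isi * fs))
    impulses[(np.arange(len(seq)) * isi * fs).astype(int)] = seq
    burst = np.sin(2 * np.pi * 40 * np.arange(0, 3 / 40, 1 / fs))
    return np.convolve(impulses, burst)

seq = m_sequence()
print(len(seq) * 0.025, "s preamble")    # 6.375 s, as stated
p = preamble(seq)                        # audio header to prepend to the stimulus
r = replica(seq)                         # template for the replica-correlator
```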
The values used in this experiment are the typical ones used in specialized literature [38, 39]. Preambles were generated using the same m-seq. Generation of the preamble header. The upper part of this figure shows the two signals involved in the generation of the preamble, namely the m-seq (upper left plot) and the tone-pip (upper-right plot). In this illustrative example, we present a fragment of an m-seq with codes [0 1 0 0 1 1 1]. The tone-pip is generated as a 1-kHz tone of 5-ms duration with rising and falling flanks of 1 ms each. On the bottom, the plot shows the convolution of both We generated a synthetic replica of the expected EEG response to the preamble. This signal was built by convolution of m(t) with 3 cycles of a 40-Hz sinusoidal signal. The preamble was detected by means of the replica-correlator detector with inputs from the recorded EEG signal and the replica. The output of the replica-corrector was either detection of a preamble (true positive or TP) or detection of an artifact (false positive or FP). We varied the detection threshold to estimate the ROC. The BOP was chosen as the detection threshold that maximized (4) $$ 10{log}_{10}\left[{\left(\mathrm{T}\mathrm{P}\right)}^2/\left(\mathrm{T}\mathrm{P}+\mathrm{F}\mathrm{P}\right)\right] $$ where (TP + FP) is the total number of detections and TP is the number of detected preambles. In the case of trial averaging with perfect synchronous averaging, FP equals zero and TP equals the number of trials. Then, (4) would yield the maximum denoising capacity in terms of SNR. In addition, we computed the AUC, which represents the ability of TAP-S to detect preambles correctly. Some simple rules were adopted to reduce the number of FPs. We excluded the beginning and the end of the EEG signals to avoid electrical artifacts due to electrode impedance measurement, initial calibration of amplifiers, filters setting, etc. The replica-correlator detector found peaks that exceeded the threshold with a minimum separation guard between them. If two peaks were detected within the separation guard, then the one with the highest amplitude was considered as preamble detection. If the separation between peaks was longer than the separation guard, then both were considered two independent detections. The upper limit of the separation guard is the duration of the preamble, namely 6.375 s, because this is minimum time between two consecutive trials without resting time. The minimum separation time is twice the length of a pulse of 3 cycles of the 40-Hz phenomenon, namely 150 ms, because it is the width of its autocorrelation function. With these two limits, we did some preliminary trials and we estimated that a time guard of 2 s was a flexible choice that keeps balance between a low rate of false positives and a good time resolution for peak scrutiny. EEG signal preprocessing To suppress artifacts (e.g., due to muscular movement or induced by electrical glitches), any EEG amplitudes larger than 50 μV were grounded before presented to the replica-correlator detector. The output of the replica-correlator in the absence of noise is a pass-band signal (a sinc function centered at 40 Hz). Since we preferred a base-band signal to perform detection of peaks of synchrony, we used a simple envelope detector (half-wave rectifier and low-pass filter) to obtain the base-band signal. The width of the main lobe of the sinc function in the frequency domain is the inverse of the duration of the 40-Hz phenomenon, that is, 3 cycles of 40 Hz. 
Then, the total length corresponds to 75 ms and the inverse is 13.3 Hz. This was the 3-dB cutoff frequency of the low-pass filter. The filter consisted in a second order Butterworth filter that was executed forward and reverse to cancel phase shifts. Note that the net effect is a fourth order filter with 6 dB of loss at the original 3 dB cutoff frequency. The filtering process explained in this section is applied to the whole EEG acquisition, and the main goal is the detection of the preamble. In a clinical application, the specific processing of trials is typically specified by the clinical protocol. The plots and tables of this section are intended to reveal the main goals of this experiment, namely the assessment of preliminary results of TAP-S as bio-synchronization mechanism (Figs. 10 and 11 and Table 1) and the analysis of its practical utility in medical diagnosis (Table 2 and Fig. 12). TAP-S performance at BOP for subjects 1 and 2 (upper and bottom plots, respectively). It shows the output of the replica-correlator detector (the noisy background), the onset marks provided by the clinical EEG system (the vertical lines), the onset marks detected by TAP-S (the filled circles), and the best detection threshold corresponding to the BOP (the dotted horizontal line). In the X-axis, the length of the experiment in seconds. In the Y-axis, the output of the detector in arbitrary units ROC curves for subjects 1 and 2 (left and right plots, respectively) Table 1 Statistic at BOP Table 2 Stimulus onset (seconds) Illustrative example with TAP-S. Thin lines in the snapshots show target ERPs (N100 and P200) used as patterns for trial averaging. The two main plots show for subjects 1 and 2 (upper and bottom plots, respectively) the TAP-S-based trial averaging of the target ERPs taken as the error of synchrony as shown from the values of Table 2 (thick lines). Also, they show synchronous trial averaging (thin lines). The small snapshots on the upper corners correspond to a zoom-in of the central part of each main plot. In the case of the TAP-S, no relevant deformation can be observed from the shape of ERPs, and in the case of subject 1, a negligible delay with respect to the synchronous trial averaging. In the X-axis, time in milliseconds. In the Y-axis, amplitude of ERPs in arbitrary units Figure 10 contains relevant information about the detection performed at the BOP for subjects 1 and 2. It shows the output of the replica-correlator detector, the onset marks provided by the clinical EEG system, the onset marks detected by TAP-S, and the best detection threshold corresponding to the BOP (dotted horizontal line). Figure 11 shows the ROC curves for both subjects. The AUC was AUC1 = 0.99 and AUC2 = 0.70 for subjects 1 and 2, respectively. The BOP was considered the detection threshold that maximized (4). These maximum values for each subject, in dB units, were BOP1 = 12.6 dB and BOP2 = 7.7 dB (see Table 1). Assuming perfect synchrony, these values correspond to the maximum improvement in SNR of ERPs after trial averaging with respect to a single trial. Table 1 shows statistics of the detection at the BOP, namely the TF, FP, accuracy, and confidence intervals of a proportion at a significance level α = 0.05. Table 2 shows details about the synchrony for the 21 trials in both subjects. The first column of each subject indicates the absolute time of each trial onset referenced to the beginning of the experiment. 
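Before walking through Table 2, the preprocessing and detection chain described above can be summarized in a short sketch: artifact grounding at 50 μV, correlation with the replica, half-wave rectification, the zero-phase second-order Butterworth low-pass at 13.3 Hz, peak picking with a 2-s guard, and the score of Eq. (4) whose maximum over thresholds defines the BOP. It is a minimal illustration assuming a NumPy/SciPy environment; the variable names and the greedy handling of peaks within the guard (via the find_peaks distance argument) are approximations, not the study's exact implementation.

```python
# Sketch of the preamble-detection pipeline (replica correlation + envelope +
# thresholded peak picking with a 2 s separation guard) and the BOP score.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 1000  # EEG sampling rate, Hz

def detect_preambles(eeg, replica, threshold, fs=FS, guard_s=2.0):
    """Return sample indices of candidate preamble onsets in one EEG channel."""
    x = np.where(np.abs(eeg) > 50e-6, 0.0, eeg)      # ground >50 uV (signal in volts)
    corr = np.correlate(x, replica, mode="same")     # replica-correlator output
    env = np.maximum(corr, 0.0)                      # half-wave rectifier
    b, a = butter(2, 13.3 / (fs / 2), btype="low")   # 2nd-order Butterworth, 13.3 Hz
    env = filtfilt(b, a, env)                        # forward-reverse (zero phase)
    peaks, _ = find_peaks(env, height=threshold,
                          distance=int(guard_s * fs))  # 2 s separation guard
    return peaks

def bop_score(tp: int, fp: int) -> float:
    """Eq. (4): 10*log10(TP^2 / (TP + FP)), maximized over thresholds at the BOP."""
    return 10 * np.log10(tp ** 2 / (tp + fp))

print(round(bop_score(21, 0), 1))   # 13.2 dB: theoretical maximum with 21 trials, 0 FPs
```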
For instance, the first trial of subject 1 initiated 90.765 s after the experiment started. This first column (real) represents the gold pattern. The second column (TAP-S) indicates the onset time yielded by TAP-S. The last column (err.) computes the difference between the first and second columns, that is, the error in synchrony. Italics indicate trials for which TAP-S was unable to detect a valid preamble. This is the case when the error was longer than 150 ms. They were considered as outliers and not taken into account for the calculation of mean and standard deviation (see bottom rows of Table 2). Based on the results of Table 2, the example of Fig. 12 illustrates the effects of trial averaging after using TAP-S and with perfect synchrony. For this purpose, synthetic ERPs, namely N100 and P200, were generated and used as patterns (thin lines in the snapshots). The two main plots of Fig. 12 show for subjects 1 and 2 (upper and bottom plots, respectively) the TAP-S-based trial averaging of the synthetic ERPs, assuming as synchrony errors as shown from the values of Table 2 (thick lines). Also, they show perfectly synchronous trial averaging (thin lines). The small snapshots on the upper-right corners correspond to a zoom-in of the central part of each main plot. The errors in synchrony with respect to the N100 and P200 components were less than 1 ms and −11 ms for subjects 1 and 2, respectively. See Section 5.2 for discussion regarding the impact of these error magnitudes. In this section, we discuss the results in terms of detection performance and medical diagnosis utility. In summary, our preliminary results with two subjects support that TAP-S is capable to provide bio-synchronization by means of the 40-Hz response. However, the dissimilar results between subject 1 and subject 2 suggest that additional testing is required before TAP-S could be considered efficient enough for clinical applications. Performance of TAP-S Figure 10 shows detection performance at BOP for both subjects. The bottom plot shows a much more noisy aspect than the upper plot. We analyzed the EEG raw signals from both subjects, and the ones corresponding to subject 2 presented many more EEG artifacts. It could be due to muscular movements or nervous behavior during the trials (in fact, this subject recognized that this was his very first experience in this type of experiments). In some technologies such BCIs, it is assumed that they do not work for everyone [58] (i.e., BCI illiteracy). It could be stated that even though BCIs do not work for everyone, evoked potential recording does. That is, auditory potentials evoked by the recommended protocol (clinical instrumentation, setup, number of electrodes and positions, stimuli intensity, number of averages, signal preprocessing, etc.) are reproducible. Our proposal does not follow any clinical protocol since we pursued a simple plug-and-play approach (e.g., only one active electrode, no calibration session, and the use in detection of a generic replica) suitable for future plug-and-play BCI applications. Another justification for the poor performance of subject 2 could be in the stimulus used for the evocation of the 40-Hz response. Under our plug-and-play approach, the design of the stimulus was the same for both subjects. However, many aspects influence the amplitude of the 40-Hz response and, hence, the SNR of the signal to be detected (e.g., the intensity and auditory threshold of each person or the optimal carrier of the tone-pip). 
In addition, it has been reported that the optimal peaking frequency of the 40-Hz response probably varies (in the range 35–45 Hz) from one person to the next [36]. The latter would lead to variances of the amplitude of the 40-Hz response that could justify the differences in performance between subjects 1 and 2. For sure, a calibration session would have improved the results of both subjects, but at the cost of our plug-and-play approach. Under these circumstances, entire success cannot be expected. This is the cost of the human factor. However, and despite the noisy aspect of the bottom plot, TAP-S was able to filter most of them due to the time guard defined in the methodological section (at BOP, only FP = 43 and TP = 19). It is important to consider that for a high rate in successful detection, some premises were introduced in the detection, namely the initial and end periods and the time guard. Without this information, the number of FPs would have increased. It is important to understand that this prior information only filters out FPs in the scale of seconds (e.g., separation guard was 6.375 s). This is not relevant to TAP-S and does not affect the essence of this experiment because TAP-S concerns synchronization in the millisecond scale. Synchronization in the scale of seconds between the media player and the EEG headset can be easily performed by IP protocols or even manually. Figure 11 shows the ROC curves for both subjects. A rough guide for classifying the accuracy of a test based on the AUC is the traditional academic point system [59]: 1.0 perfect; 0.90–0.99 excellent; 0.80–0.89 good; 0.70–0.79 fair; 0.60–0.69 poor; 0.50–0.60 worthless. In our study, the detection of preamble was excellent (AUC1 = 0.99) and fair (AUC2 = 0.70) for subjects 1 and 2, respectively. Then, the performance of TAP-S as preamble detector can be considered as excellent and fair for subjects 1 and 2, respectively. Table 1 shows some details about the detection performance at the BOP. The large difference in accuracy between subject 1 (acc1 = 0.95) and subject 2 (acc2 = 0.31) is justified by the noise at the output of the replica-correlator detector (Fig. 8, bottom). The maximum improvement of SNR after trial averaging is remarkable (see Table 1, last row). It must be interpreted as the denoising capacity of TAP-S in this study. The maximum theoretical capacity, considering 0 FPs and 21 TPs is 13.2 dB by direct application of (4). Then, subject 1 achieved almost the maximum (12.6 dB). We finalize this section mentioning that the scope of this experiment was not to obtain the highest performance in detection but to show evidence that synchronization in the range of milliseconds can be performed with TAP-S. As stated before, EEG is considered a non-stationary signal with large variations in inter-trials for the same subject and inter-subjects. The ROC curve showed a completely satisfactory performance for one subject. Consequently, it is more than enough to think of TAP-S as a promising technique that deserves to be optimized in future works. Usefulness of TAP-S in medical diagnosis Table 2 shows the numerical results on the detection of the onset marks. An aspect that deserves to be discussed is the large error values in the table (italics). It is obvious that the TAP-S did not work in all trials and some outliers happened. However, the gist of the discussion should orbit around the applicability of TAP-S in medical diagnosis with wireless EEG headset. 
The questions to address are the following: (i) What would happen if some of the trials were detected with large synchronization errors? The immediate answer is that the corresponding EEG data would contain nothing but uncorrelated EEG that does not disturb the detection, except that more trials are needed to achieve a certain target SNR. (ii) What would be the effect of small errors around 20–50 ms? Small errors would give rise to a smoother shape, but they would have no effect on the latency, since negative and positive errors would cancel each other after a sufficient number of averages. Furthermore, the results of subject 1 (see Table 2) show a very low number of errors in this time window (only 1 out of the 21 trials). The low number of small errors is expected because FPs are distributed over a large time window (from a few milliseconds to seconds) and not only in the 20–50-ms window. Therefore, we expect a relatively small number of small errors, and thus a minor impact on the shape. The next question would be: how would this affect the quality of the registered ERPs? In clinical practice, ERPs are estimated by averaging a high number of trials. Averaging some trials with uncorrelated EEG data would just decrease the SNR of the ERPs, but the latencies and shape of the averaged ERPs would remain unaltered. It is interesting to point out that FPs with errors of several hundred milliseconds affect the average less than those with errors of a few milliseconds, because the latter contain partially correlated EEG data. Figure 12 is an illustrative example of this with the results reported in Table 2. The italicized trials of the table are easily identified because, after trial averaging, they become small and positioned far from the zero lag. The snapshot of subject 1, at the upper-right corner of the main plot, shows in detail that the averaged trials yield N100 and P200 peaks with near-zero error in latency with respect to the synthesized ERP. Furthermore, the envelope remains almost the same. This justifies the usability of TAP-S in medical diagnosis with wireless EEG teleservices. In this paper, we propose a novel technique that provides synchronization for wireless EEG acquisition. TAP-S adds a preamble to the stimulation data that embeds in its neuro-physiological response a synchronization mechanism based on pseudo-random sequences. Despite promising performance, many open questions remain. For instance, the impulse response was assumed to be 3 cycles of 40 Hz. It could be optimized on an individual basis with a prior calibration session. Also, the type of potential used for the preamble (the mid-latency auditory 40-Hz response) could be substituted by others. Nothing prevents the use of visual stimuli instead of auditory ones. In fact, there are some pioneering studies about time-locking visual stimuli [60, 61]. Another open question is the design of the preamble. A sequence of length 255 seemed to be a good trade-off between the detection performance and the overhead added to the stimulus. In our study, this overhead was just 6.375 s, while the recommendation to obtain visual ERPs is longer than 30 s (more than one sweep, each sweep containing 64 stimuli at a rate of two stimuli per second [29]). Furthermore, the stimulus data could be encapsulated by a header and a tail, both for synchronization purposes, thus improving detection performance at the cost of increased overhead. To the best of our knowledge, this is the first attempt to provide a synchronization mechanism based on brain signals for use in wireless ERP acquisition.
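As a quick sanity check on the overhead figure quoted above: with one 40-Hz cycle (25 ms) per element of the length-255 sequence,

$$ 255 \times \frac{1}{40\ \mathrm{Hz}} = 255 \times 25\ \mathrm{ms} = 6.375\ \mathrm{s}, $$

which matches the stated overhead. The one-cycle-per-element timing is our inference from these numbers rather than a detail stated explicitly in this excerpt.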
Our very first design remarkably worked with one subject. This is the most relevant contribution of this work and it proves that TAP-S is a feasible approach that deserves to be improved. After further improvements, the level of synchronization provided by TAP-S would let remote assessment of ERPs and online analysis, thus opening the door to interactive mobile EEG applications. MA Ahluwalia, SD Vold, Visual evoked potentials and glaucoma. Glaucoma Today. 39–40 (2014). http://glaucomatoday.com/2014/10/. A. Van Den Bruel, J. Gailly, F. Hulstaert, S. Devriese, M. Eyssen, The value of EEG and evoked potentials in clinical practice KCE reports 109C, (2009) S Tennina, M Di Renzo, E Kartsakli, F Graziosi, AS Lalos, A Antonopoulos, PV Mekikis, L Alonso, WSN4QoL: a WSN-oriented healthcare system architecture. Int. J. Distrib. Sens. Netw. 2014, 1–16 (2014) E Kartsakli, A Antonopoulos, A Lalos, S Tennina, M Renzo, L Alonso, C Verikoukis, Reliable MAC design for ambient assisted living: moving the coordination to the cloud. IEEE Commun. Mag. 53, 78–86 (2015) E Kartsakli, A Lalos, A Antonopoulos, S Tennina, M Renzo, L Alonso, C Verikoukis, A survey on M2M systems for mHealth: a wireless communications perspective. Sensors. 14, 18009–18052 (2014) J Askamp, MJAM van Putten, Mobile EEG in epilepsy. Int. J. Psychophysiol. 91, 30–35 (2014) C Campos, E Caudevilla, A Alesanco, N Lasierra, O Martinez, J Fernández, J García, Setting up a telemedicine service for remote real-time video-EEG consultation in La Rioja (Spain). Int. J. Med. Inf. 81, 404–414 (2012) M Bruyneel, S Van den Broecke, W Libert, V Ninane, Real-time attended home-polysomnography with telematic data transmission. Int. J. Med. Inf. 82, 696–701 (2013) F Brunnhuber, D Amin, Y Nguyen, S Goyal, MP Richardson, Development, evaluation and implementation of video-EEG telemetry at home. Seizure. 23, 338–343 (2014) E Goodwin, RH Kandler, JJP Alix, The value of home video with ambulatory EEG: a prospective service review. Seizure. 23, 480–482 (2014) JJP Alix, RH Kandler, SR Mordekar, The value of long term EEG monitoring in children: a comparison of ambulatory EEG and video telemetry. Seizure. 23, 662–665 (2014) N Patwari, J Wilson, S Ananthanarayanan, SK Kasera, DR Westenskow, Monitoring breathing via signal strength in wireless networks. IEEE Trans. Mob. Comput. 13, 1774–1786 (2014) L Guo, C Zhang, J Sun, Y Fang, A privacy-preserving attribute-based authentication system for mobile health networks. IEEE Trans. Mob. Comput. 13, 1927–1941 (2014) N Birbaumer, N Ghanayim, T Hinterberger, I Iversen, B Kotchoubey, A Kübler, J Perelmouter, E Taub, H Flor, A spelling device for the paralysed. Nature. 398, 297–298 (1999) MA Lopez-Gordo, F Pelayo, A binary phase-shift keying receiver for the detection of attention to human speech. Int. J. Neural Syst 23(4), 130418190845004 (2013) A Nijholt, DP-O Bos, B Reuderink, Turning shortcomings into challenges: brain–computer interfaces for games. Entertain. Comput. 1, 85–94 (2009) L-D Liao, C-Y Chen, I-J Wang, S-F Chen, S-Y Li, B-W Chen, J-Y Chang, C-T Lin, Gaming control using a wearable and wireless EEG-based brain-computer interface device with novel dry foam-based sensors. J. NeuroEngineering Rehabil. 9, 5 (2012) P Chang, KS Hashemi, MC Walker, A novel telemetry system for recording EEG in small animals. J. Neurosci. Methods. 201, 106–115 (2011) R Matthews, PJ Turner, NJ McDonald, K Ermolaev, TM Manus, RA Shelby, M Steindorf (2008). 
Real time workload classification from an ambulatory wireless EEG system using hybrid EEG electrodes. (30th Annual International IEEE EMBS Conference Vancouver, British Columbia, Canada, 2008), p. 5871–75 S. Lee, Y. Shin, S. Woo, K. Kim, H.-N. Lee, Review of Wireless Brain-Computer Interface Systems. In: R. Fazel-Rezai (ed.) Brain-Computer Interface Systems - Recent Progress and Future Prospects. InTech (2013). http://www.intechopen.com/books/brain-computer-interface-systems-recent-progress-and-future-prospects. M Lopez-Gordo, D Morillo, F Pelayo, Dry EEG electrodes. Sensors 14, 12847–12870 (2014) IH Iversen, N Ghanayim, A Kübler, N Neumann, N Birbaumer, J Kaiser, A brain–computer interface tool to assess cognitive functions in completely paralyzed patients with amyotrophic lateral sclerosis. Clin. Neurophysiol. 119, 2214–2223 (2008) M van Gerven, O Jensen, Attention modulations of posterior alpha as a control signal for two-dimensional brain–computer interfaces. J. Neurosci. Methods. 179, 78–84 (2009) MA Lopez-Gordo, A Prieto, F Pelayo, C Morillas, Customized stimulation enhances performance of independent binary SSVEP-BCIs. Clin. Neurophysiol. 122, 128–133 (2011) J Thie, A Klistorner, SL Graham, Biomedical signal acquisition with streaming wireless communication for recording evoked potentials. Doc. Ophthalmol. 125, 149–159 (2012) NA Badcock, KA Preece, B de Wit, K Glenn, N Fieder, J Thie, G McArthur, Validation of the Emotiv EPOC EEG system for research quality auditory event-related potentials in children. PeerJ. 3, e907 (2015) M Caleffi, IF Akyildiz, L Paura, On the solution of the Steiner tree NP-hard problem via Physarum BioNetwork. IEEEACM Trans. Netw. 23, 1092–1106 (2015) GE Santagati, T Melodia, L Galluccio, S Palazzo, Medium access control and rate adaptation for ultrasonic intrabody sensor networks. IEEEACM Trans. Netw. 23, 1121–1134 (2015) JV Odom, M Bach, M Brigell, GE Holder, DL McCulloch, AP Tormene, Vaegan: ISCEV standard for clinical visual evoked potentials (2009 update). Doc. Ophthalmol. 120, 111–119 (2010) H Gevensleben, B Holl, B Albrecht, C Vogel, D Schlamp, O Kratz, P Studer, A Rothenberger, GH Moll, H Heinrich, Is neurofeedback an efficacious treatment for ADHD? A randomised controlled clinical trial. J. Child Psychol. Psychiatry. 50, 780–789 (2009) H Heinrich, H Gevensleben, U Strehl, Annotation: neurofeedback ? Train your brain to train behaviour. J. Child Psychol. Psychiatry. 48, 3–16 (2007) M Kouijzer, J Demoor, B Gerrits, M Congedo, H Vanschie, Neurofeedback improves executive functioning in children with autism spectrum disorders. Res. Autism Spectr. Disord. 3, 145–162 (2009) MEJ Kouijzer, JMH de Moor, BJL Gerrits, JK Buitelaar, HT van Schie, Long-term effects of neurofeedback treatment in autism. Res. Autism Spectr. Disord. 3, 496–501 (2009) ME Ayers, Neurofeedback for cerebral palsy. J. Neurother. 8, 93–94 (2004) A Bachers, Neurofeedback with cerebral palsy and mental retardation: a case report. J. Neurother. 8, 95–96 (2004) R Galambos, S Makeig, PJ Talmachoff, A 40-Hz auditory potential recorded from the human scalp. Proc. Natl. Acad. Sci. 78, 2643–2647 (1981) E Başar, B Rosen, C Başar-Eroglu, F Greitschus, The associations between 40 Hz-EEG and the middle latency response of the auditory evoked potential. Int. J. Neurosci. 33, 103–117 (1987) CD Bauch, DE Rose, SG Harner, Brainstem responses to tone pip and click stimuli. Ear Hear. 
1, 181–184 (1980) W Szyfter, R Dauman, RC de Sauvage, 40 Hz middle latency responses to low frequency tone pips in normally hearing adults. J. Otolaryngol. 13, 275–280 (1984) Z-M Xu, E De Vel, B Vinck, PB van Cauwenberge, Choice of a tone-pip envelope for frequency-specific threshold evaluations by means of the middle-latency response: normally hearing subjects and slope of sensorineural hearing loss. Auris. Nasus. Larynx. 24, 333–340 (1997) C Borgmann, B Roß, R Draganova, C Pantev, Human auditory middle latency responses: influence of stimulus type and intensity. Hear. Res. 158, 57–64 (2001) DL Woods, C Alain, D Covarrubias, O Zaidel, Middle latency auditory evoked potentials to tones of different frequency. Hear. Res. 85, 69–75 (1995) R. Acharya, Navigation Signals. In: Understanding Satellite Navigation. pp. 83–153. Elsevier (2014). http://dx.doi.org/10.1016/B978-0-12-799949-4.00004-X. A.L. Swindlehurst, B.D. Jeffs, Seco-Granados, G., Li, J.: Applications of Array Signal Processing. In: Academic Press Library in Signal Processing. pp. 859–953. Elsevier (2014). http://dx.doi.org/10.1016/B978-0-12-411597-2.00020-5. BR Mahafza, Radar Systems Analysis and Design Using Matlab (Chapman & Hall/CRC, Boca Raton, 2000). https://www.google.es/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwjz2LjnwP_KAhVB0BoKHc4mC6AQFggdMAA&url=http%3A%2F%2Fstaff.on.br%2Fpuxiu%2FMatLab_Pack%2FRadar%2520Systems%2520Analysis%2520and%2520Design%2520Using%2520MatLab%2520-%2520Mahafza%2520Bassem%2520R.pdf&usg=AFQjCNEJPGjDCyr8wuVJwjxa0GCAETMicA. M. Parker, CDMA Wireless Communications. In: Digital Signal Processing 101. pp. 151–167. Elsevier (2010) M. Ghogho, P. Ciblat, A. Swami, Synchronization. In: Academic Press Library in Signal Processing. pp. 9–94. Elsevier (2014). http://dx.doi.org/10.1016/B978-0-12-396500-4.00002-8. MA Lopez-Gordo, DS Morillo, MAJ Van Gerven, Spreading codes enables the blind estimation of the hemodynamic response with short-events sequences. Int. J. Neural Syst 25(1), 141110180102006 (2014) GT Buračas, GM Boynton, Efficient design of event-related fMRI experiments using M-sequences. NeuroImage. 16, 801–813 (2002) EE Sutter, Imaging visual function with the multifocal m-sequence technique. Vision Res. 41, 1241–1255 (2001) T Fawcett, An introduction to ROC analysis. Pattern Recognit. Lett. 27, 861–874 (2006) MA Lopez-Gordo, R Ron-Angevin, F Pelayo Valle, Auditory Brain-Computer Interfaces for Complete Locked-In Patients, in Advances in Computational Intelligence, ed. by J Cabestany, I Rojas, G Joya (Springer Berlin Heidelberg, Berlin, Heidelberg, 2011), pp. 378–385 H Jasper, Report of the committee on methods of clinical examination in electroencephalography. Electroencephalogr. Clin. Neurophysiol. 10, 370–375 (1958) SA Hillyard, RF Hink, VL Schwent, TW Picton, Electrical signs of selective attention in the human brain. Science. 182, 177–180 (1973) KA Yurgil, EJ Golob, Neural activity before and after conscious perception in dichotic listening. Neuropsychologia. 48, 2952–2958 (2010) MA Lopez-Gordo, F Pelayo, A Prieto, E Fernandez, An auditory brain-computer interface with accuracy prediction. Int. J. Neural Syst. 22, 1–14 (2012) MA Lopez-Gordo, E Fernandez, S Romero, F Pelayo, A Prieto, An auditory brain–computer interface evoked by natural speech. J. Neural Eng. 9, 1–9 (2012) C Guger, S Daban, E Sellers, C Holzner, G Krausz, R Carabalona, F Gramatica, G Edlinger, How many people are able to control a P300-based brain–computer interface (BCI)? Neurosci. Lett. 
462, 94–98 (2009) IG Duncan, Healthcare Risk Adjustment and Predictive Modeling (ACTEX Publications, Winsted, Conn, 2011) MA Lopez-Gordo, A Prieto, F Pelayo, C Morillas, Use of phase in brain–computer interfaces based on steady-state visual evoked potentials. Neural Process. Lett. 32, 1–9 (2010) M.A. Lopez-Gordo, F. Pelayo, A. Prieto, A high performance SSVEP-BCI without gazing. In: The 2010 International Joint Conference on Neural Networks (IJCNN) Barcelona, Spain (2010), pp. 193–197. http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5596325&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5596325 This work was supported and co-financed by Nicolo Association for the R&D in Neurotechnologies for disability, the research project P11-TIC-7983, Junta of Andalucia (Spain), the Spanish National Grant TIN2015-67020, co-financed by the European Regional Development Fund (ERDF) and the CASIP research group TIC-117. We thank the Bio-engineering Institute of the University of Miguel Hernández of Elche (Spain), where part of this study was undertaken as a research stay. Department of Signal Theory, Communications and Networking, University of Granada, Granada, Spain M. A. Lopez-Gordo & P. Padilla Department of Architecture and Computers Technology, University of Granada, Granada, Spain F. Pelayo Valle Search for M. A. Lopez-Gordo in: Search for P. Padilla in: Search for F. Pelayo Valle in: Correspondence to M. A. Lopez-Gordo. Lopez-Gordo, M.A., Padilla, P. & Pelayo Valle, F. A proposal for bio-synchronized transmission of EEG/ERP data. J Wireless Com Network 2016, 54 (2016). https://doi.org/10.1186/s13638-016-0550-3 Wireless EEG Brain-computer interfaces Bio-synchronization Revolutionizing mHealth through next generation communication technologies
CommonCrawl
Is there a variant of Hochschild homology? Let's say we're looking at $d: A^{\otimes (n+2)} \rightarrow A^{\otimes (n+1)}$ in the Hochschild homology chain complex, defined by $d(a_0 \otimes a_1 \otimes \cdots \otimes a_{n+1}) = \sum_{i=0}^n (-1)^i a_0 \otimes \cdots \otimes a_ia_{i+1} \otimes \cdots \otimes a_{n+1}$. Is there a similarly defined chain complex where the differential does something which combines $3$ of the tensor components, for example? (I'm interested in constructions that work for combining any $n$.) So the differential would yield some linear combination of elements that look roughly like $a_0 \otimes \cdots \otimes a_ia_{i+1}a_{i+2} \otimes \cdots \otimes a_{n+1}$. Obviously this would also require a change in the chain groups.
abstract-algebra homology-cohomology homological-algebra EgoKilla
Maybe there is a Hochschild homology for $A_{\infty}$-algebras? But before you can have a variant of HH that uses $n$-ary operations, you need to have some sort of $n$-ary operations. Any natural operation on associative algebras is built using a binary operation that is subject to a compound ternary operation vanishing, which if you're thinking homologically leads you to viewing multiplication as being like a cocycle. It might be more fruitful to look for things that can be interpreted similarly. – Aaron
Could you elaborate on what you mean by "any natural operation on associative algebras is built using a binary operation that is subject to a compound ternary operation vanishing"? – EgoKilla
An associative algebra over $k$ is a $k$-module $A$ with a bilinear operation $m:A\otimes A\to A$ such that $m(m(a,b),c)-m(a,m(b,c))$ vanishes for all $a,b,c$. This is just a way to rewrite associativity. The fact that you have a differential at the far end of the Hochschild complex is exactly the statement that you have an associative algebra. The fact that it lifts to a differential in higher degrees is perhaps surprising, or perhaps not, but you only have one basic operation in an associative algebra (outside of the $k$-linear structure) and it is built from one relation.
Yes, there are variants of this, as mentioned in the comments. If $A$ is instead an $A_\infty$-algebra, then the Hochschild boundary on $\hom(BA,A)$ takes a more involved form. To be brief, an $A_\infty$-algebra structure on $A$ is the datum of a degree $-1$ coderivation $BA\to BA$, which in fact corresponds to a map $d : BA\to sA$. Then the Hochschild boundary is obtained by taking the bracket of coderivations with $d$. This means, for example, that you will have elements of the form $$ f(x_1,m_3(x_2,x_3,x_4),x_5)$$ in the differential, where $m_3$ is the component of $d$ corresponding to the map $(sA)^{\otimes 3}\to sA$. Pedro Tamaroff
(If you do cohomology of other types of algebras, more complicated formulas may appear. But that's kind of a longer story.) – Pedro Tamaroff ♦
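To make the shape of such a differential concrete: if the $A_\infty$-structure has higher operations $m_k : A^{\otimes k} \to A$, then (schematically, with all signs and the terms that wrap around $a_0$ suppressed; this is an illustrative sketch rather than a precise formula) the resulting Hochschild-type differential contains summands

$$ b(a_0 \otimes a_1 \otimes \cdots \otimes a_n) \;=\; \sum_{k \ge 1} \sum_{i} \pm\, a_0 \otimes \cdots \otimes m_k(a_i, \ldots, a_{i+k-1}) \otimes \cdots \otimes a_n \;+\; \cdots, $$

so the $k = 3$ terms are exactly the kind that merge three consecutive tensor factors, as asked for in the question.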
CommonCrawl
So far we have assumed that we could find a power series representation for functions. However, some functions are not equal to their Taylor series, i.e. are not analytic. How can we tell which are and which aren't? We note that $f(x)$ is equal to its Taylor series if $\displaystyle\lim_{n\to\infty}T_n(x)=f(x)$, i.e. if the sequence of partial sums $T_n(x)$ converges to $f(x)$. We define the remainder of the series by $R_n(x)$, with $R_n(x)=f(x)-T_n(x)$. Then $f(x)=T_n(x)+R_n(x)$. We can see from this that a function is equal to its Taylor series if its remainder converges to 0; i.e., if a function $f$ can be differentiated infinitely many times, and $$ \lim_{n\to\infty}R_n(x)=0, $$ then $f$ is equal to its Taylor series. We have some theorems to help determine if this remainder converges to zero, by finding a formula and a bound for $R_n(x)$. (Remainder) Theorem: Let $f(x)=T_n(x)+R_n(x)$. If $f^{(n+1)}$ is continuous on an open interval $I$ that contains $a$ and $x$, then $$R_n(x)=\frac{f^{(n+1)}(z)}{(n+1)!}(x-a)^{n+1}$$ for some $z$ between $a$ and $x$. Taylor's Inequality: If the $(n+1)$st derivative of $f$ is bounded by $M$ on an interval of radius $d$ around $x=a$, then $$\lvert R_n(x)\rvert\le\frac{M}{(n+1)!}\lvert x-a\rvert^{n+1}.$$ From this inequality, we can determine that the remainders for $e^x$ and $\sin(x)$, for example, go to zero as $n \to \infty$ (see the video below), so these functions are analytic and are equal to their Taylor series. The Remainder Theorem is similar to Rolle's theorem and the Mean Value Theorem, both of which involve a mystery point between $a$ and $b$. The proof of Taylor's theorem involves repeated application of Rolle's theorem, as is explained in this video.
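For example, take $f(x)=e^x$ and $a=0$. Every derivative of $f$ is $e^x$, so on the interval $\lvert x\rvert\le d$ we may take $M=e^d$ in Taylor's Inequality, giving $$\lvert R_n(x)\rvert\le\frac{e^d}{(n+1)!}\lvert x\rvert^{n+1}\le\frac{e^d\,d^{\,n+1}}{(n+1)!}\to 0 \quad\text{as } n\to\infty,$$ since $d^{\,n+1}/(n+1)!\to 0$ for any fixed $d$. Because this works for every $d>0$, $e^x$ equals its Taylor series $\sum_{n=0}^{\infty}x^n/n!$ for all $x$; the same style of estimate, with $M=1$, handles $\sin(x)$.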
CommonCrawl
Research article | Open | Open Peer Review | Published: 05 January 2019 Development and validation of a risk score to predict mortality during TB treatment in patients with TB-diabetes comorbidity Duc T. Nguyen1 & Edward A. Graviss1 BMC Infectious Diseasesvolume 19, Article number: 10 (2019) | Download Citation Making an accurate prognosis for mortality during tuberculosis (TB) treatment in TB-diabetes (TB-DM) comorbid patients remains a challenge for health professionals, especially in low TB prevalent populations, due to the lack of a standardized prognostic model. Using de-identified data from TB-DM patients from Texas, who received TB treatment had a treatment outcome of completed treatment or died before completion, reported to the National TB Surveillance System from January 2010–December 2016, we developed and internally validated a mortality scoring system, based on the regression coefficients. Of 1227 included TB-DM patients, 112 (9.1%) died during treatment. The score used nine characteristics routinely collected by most TB programs. Patients were divided into three groups based on their score: low-risk (< 12 points), medium-risk (12–21 points) and high-risk (≥22 points). The model had good performance (with an area under the receiver operating characteristic (ROC) curve of 0.83 in development and 0.82 in validation), and good calibration. A practical mobile calculator app was also created (https://oaa.app.link/Isqia5rN6K). Using demographic and clinical characteristics which are available from most TB programs at the patient's initial visits, our simple scoring system had good performance and may be a practical clinical tool for TB health professionals in identifying TB-DM comorbid patients with a high mortality risk. The effect of diabetes mellitus (DM) on the development and poor outcome of tuberculosis (TB) disease has been recognized for over a century [1]. While diabetes ranked 7th among the leading causes of death in 2015, TB has been recognized as a leading cause of the mortality due to an infectious disease [2, 3]. With the global increase of obesity and type 2 diabetes, the combination of diabetes and tuberculosis (TB-DM) has posed an imminent public health threat and a challenge to TB control programs worldwide [4]. In the United State (US), the prevalence of diabetes has consistently increased from 0.93% in 1958 to 7.40% in 2015 with an estimate of 30.3 million people of all ages (9.4% of the US population) living with diabetes [5, 6]. This increasing trend of diabetes morbidity in the US is concerning especially in US states (such as Texas) where both the TB and the DM prevalence are higher than the national average [7, 8]. Given that TB-DM comorbid patients may have a mortality of 2–5 times higher than that of non-diabetic TB patients [9, 10], more effective management strategies including the development of predictive models for TB mortality are urgently needed. There are a growing number of prognostic models developed to predict mortality in patients with TB disease. Many characteristics such as older age, HIV co-infection, diabetes, alcohol abuse, malnutrition, hypoxemic respiratory failure, etc. have been identified as the risk factors for poor outcomes in TB patients [11,12,13,14,15]. However, these models were not specifically developed for patients with TB-DM comorbidity and used hospital-based data with variables that are not routinely collected by TB control programs. 
The populations of these models either did not include diabetic patients [11, 12] or only included a small number of TB-DM patients [13,14,15]. The lack of a standardized prognostic system specifically developed for TB-DM patients poses a challenge for health care providers attempting to predict the risk of mortality during TB treatment in this high-risk group of patients. The present study aimed to develop and internally validate a prognostic scoring system using surveillance data with covariates which are routinely collected by most TB control program and available at the patient's initial visits for TB evaluation. This simple scoring system would be a practical tool helping quickly identify TB-DM patients having a high risk of death during TB treatment. This retrospective cohort study used the de-identified data of all confirmed TB patients from the state of Texas reported to the Centers for Disease Control (CDC)'s National TB Surveillance System (NTSS) between January 2010 through December 2016 (both genotyped and non-genotyped), who satisfied the following inclusion criteria: (1) met the clinical case definition or was laboratory confirmed based on the CDC definition for a TB case [16]; (2) received TB treatment and had a documented outcome of either "completed" or "died". Patients having treatment outcomes other than "completed" or "died" (such as "adverse", "lost", "moved", "other", "refused", or "unknown") were excluded from the analyses. Logistic regression modeling was used to determine prognostic factors associated with patient mortality. Variables with a p-value < 0.2 in the univariate analysis or considered as clinically significant were evaluated further in the multiple logistic regression. The variable section for the multiple logistic regression model was conducted according the Bayesian Modeling Averaging (BMA) method [17, 18]. As our goal was to develop a model that could be used in the patient's initial visit when the Mtb biological confirmation is still not available, Mtb culture and genotype-related variables were not evaluated in the multivariable modeling. Model discrimination was determined by the area under the Receiver Operating Characteristic (ROC) curve (AUC). The best model was chosen based on the smallest Bayesian information criterion and highest AUC. The model's good calibration was determined by a non-significant Hosmer-Lemeshow's goodness of fit test. Significant risk factors were assigned weighted-points that were proportional to their β regression coefficient values. A prognostic score was calculated for each individual patient in the cohort. The methodology of categorizing risk groups has been described elsewhere [19, 20]. Briefly, the patients were categorized in three distinct groups of mortality risk: low (< 10% mortality), medium (10–20% mortality), and high risk (> 20% mortality). Internal validation was conducted using the bootstrap resampling method with 2000 replications. Model calibration was evaluated by the Hosmer-Lemeshow goodness-of-fit test. A non-significant p-value of the Hosmer-Lemeshow goodness-of-fit test indicates the model has a good calibration (predictive accuracy). The comparison of the AUC between models was conducted using the chi-square test. All analyses were performed with Stata MP14.2 (StataCorp LLC, College Station, TX). A p value < 0.05 was considered statistically significant. Between January 2010 and December 2016, 1400 TB-DM patients in Texas were reported in the National TB Surveillance System database. 
After excluding 173 patients who had an outcome other than "completed" or "died", 1227 TB-DM patients, who started the TB treatment and had a treatment outcome of completed treatment or died before completion, were included in the analysis, of whom 1115 completed TB treatment and 112 died (Fig. 1). Except for injecting-drug user (IDU) (p = 0.01), no other difference was found between the patients who were included in the analysis and those who were excluded (Additional file 1: Table S1). Flowchart of the study population. NTSS: National Tuberculosis Surveillance System The crude and adjusted associations between characteristics and mortality are presented in Table 1. Nine variables (age ≥ 65 years, being US-born, being homeless, IDU, having chronic kidney disease, TB meningitis, miliary TB, positive acid-fast bacilli (AFB) smear, and positive HIV status) were significant in the multiple logistic regression model and were included in the risk score development. The weighted points of risk factors were calculated using the linear transformation of the corresponding β coefficient (Table 2). A risk score was calculated for individual patients using the following formula:
$$\begin{aligned}\text{Risk score} ={}& 16\times[\text{Age} \ge 65] + 5\times[\text{US-born}] + 11\times[\text{Homeless}] + 20\times[\text{IDU}] + 20\times[\text{Chronic kidney failure}] \\ &+ 20\times[\text{TB meningitis}] + 13\times[\text{Miliary TB}] + 6\times[\text{AFB(+) smear}] + 24\times[\text{Positive HIV}].\end{aligned}$$
Table 1 Characteristics associated with mortality during tuberculosis treatment Table 2 Weighted score assignment There were 776 (63.7%) low-risk, 233 (19.2%) medium risk and 208 (17.1%) high-risk patients with the mortality by risk group of 3.1, 12.9 and 27.9%, respectively. The final model had good discrimination in both development (AUC = 0.83 95% CI 0.79, 0.87) and bootstrap validation (AUC = 0.82 95% CI 0.78, 0.87) (Table 3, Fig. 2). The model also had a good calibration with a non-significant Hosmer-Lemeshow chi-square of 4.54 (p = 0.81) and a small Brier score of 0.07 (Table 3). Patients in the medium- and high-risk groups had more than a four- and twelve-fold increased odds of mortality compared with patients in the low-risk group (Table 4). We also compared the performance of the current TB-DM specific model and that of our previously-published mortality predictive model, which included all confirmed TB patients who started TB treatment [19]. In TB-DM patients who were included in this study, the TB-DM specific model had a significantly higher discrimination power than that of its predecessor [AUC 0.83 (95% CI 0.79, 0.88) versus 0.76 (0.71, 0.82), p < 0.001] (data not shown). Table 3 Prognostic score performance in patients with complete data for multivariate model (N = 1113) Area under the ROC curve, final model.
ROC: Receiver Operating Characteristic curve Table 4 Odds ratios for death, by risk group Based on the intercept value (−4.004594) of the final model and the corresponding β coefficients of the variables included in the risk score, the predicted probability of death during TB treatment can be calculated from the following formula:
$$\begin{aligned} L ={}& -4.004594 + 1.579789\times[\text{Age} \ge 65] + 0.4946987\times[\text{US-born}] + 1.05767\times[\text{Homeless}] + 1.980345\times[\text{IDU}] \\ &+ 1.945451\times[\text{Chronic kidney failure}] + 1.981255\times[\text{TB meningitis}] + 1.332084\times[\text{Miliary TB}] \\ &+ 0.5537461\times[\text{AFB-positive smear}] + 2.404202\times[\text{Positive HIV}], \\ \text{Probability of death} ={}& \frac{\exp(L)}{1+\exp(L)}. \end{aligned}$$
(A worked numerical sketch of this calculation is given at the end of this article.) We have created a free online application for our risk score calculator, which can be downloaded from https://oaa.app.link/Isqia5rN6K and is usable on both Android and iOS mobile devices (registration for a free OpenAsApp account is required to access the calculator). The calculator app provides a risk score (in points), a risk group (low, medium or high), and a probability of death (%) during treatment for an individual patient. Using 7 years of TB surveillance data from the state of Texas, we have developed and internally validated a simple prognostic score to predict mortality during treatment of TB-DM patients using only nine variables, which are routinely collected by most TB control programs at the patients' initial visits for TB evaluation, before the availability of the Mtb culture. Given its good discrimination and calibration, together with the availability of a mobile calculator app, the scoring system would be a practical tool for clinicians and public health professionals to quickly identify TB-DM patients who have a high mortality risk without waiting for biological confirmation. Our scoring system classifies patients into three distinct risk groups, which would help health care workers allocate appropriate medical support and follow-up resources. While patients in the low-risk group can be managed according to the routine protocol, TB-DM patients in the high-risk group would need more aggressive medical support. Although many of our prognostic model's components are unmodifiable characteristics, there are still multiple approaches that could be carried out to improve patient survival, especially for patients in the high-risk group. Better management of the plasma glucose level is among the important strategies to reduce mortality: compared with TB patients with controlled DM, TB patients with uncontrolled DM have more than 4 times the odds of death and 2 times the odds of non-conversion of sputum cultures after 2 months of intensive treatment [22]. Education on the negative impacts of DM on TB patients, as well as guidelines for changes in diet and physical activity, should be provided to patients and their families so that they can be more compliant with the treatment and actively contribute to improved glucose control [23].
More aggressive nutritional support would be necessary for high-risk patients who are residents of long-term care facilities as these patients are also prone to other potential risk factors for TB mortality such as old age and under-nutritional condition [11]. Given that the combination antiretroviral therapy (cART) could reduce up to 68% TB-related deaths in TB/HIV co-infected patients [24], early initiation of cART could be considered in high-risk patients, who are also HIV positive, although cART is recommended to be started within 8 weeks of starting TB treatment if CD4+ level ≥ 50 cells/mm3 [25]. In our scoring system, a positive HIV status, having chronic kidney failure, TB meningitis, being IDU and age ≥ 65 years are strong predictors for mortality in TB-DM patients. These findings are consistent with current literature for TB patients in general [13, 26,27,28,29]. In a previous analysis using US multiple cause-of-death (MCOD) data from 1990 through 2006, Jung et al. observed that foreign-born patients had more than twice the TB-related death rate than that of US-born patients [30]. Meanwhile, our findings suggest that US-born TB-DM patients have more than twice the odds of death compared with foreign-born patients even after controlling for older age, homelessness, IDU, alcohol abuse and HIV infection. Our finding is consistent with the results reported by Magee et al. in a more recent population-based study using the state-wide surveillance data from 2009 to 2012 in Georgia, in which a significantly higher mortality in US-born patients was found in both non-diabetic and diabetic TB patients [31]. The possible reasons leading to a higher mortality in US-born than foreign-born TB patients have been discussed elsewhere [19]. Briefly, foreign-born suspected TB patients might promptly receive the diagnosis and aggressive management than the US-born patients as foreign-birth has been recognized as a strong risk factor for TB disease by the Texas TB program [32]. Early detection of TB cases among foreign-born persons which occurs during immigration screening, contact investigation and targeted testing may also play a role in relatively lowering the mortality risk in foreign-born patients compare to US-born patients. Lastly, the significantly higher proportion of US-born TB cases in Texas compared to the national average (41.3% versus 31.4% in 2016, p < 0.001) suggests that some US-born patients may not be timely diagnosed, especially those patients who do not have a recent history of travelling to high TB burden countries [33]. The impact of DM-TB on patient mortality was reported inconsistently among studies in different populations [31, 34]. In one of our previous studies using the surveillance data of all confirmed TB patients reported from the state of Texas between 2010 and 2016, the unadjusted association between mortality during treatment and diabetes was not significant (unadjusted odds ratio [OR] 1.04; 95% confidence interval [95% CI] 0.74, 1.47; p = 0.82) [19]. However, this unadjusted OR was obtained from only half of the study sample and might underestimate the impact of diabetes on the mortality. A more recent trend analysis using the entire data of the same population suggested that TB-DM patients had a higher mortality (10.3%) than non-DM patients (7.6%, p = 0.001) with nearly a 3-fold increase in the odds of overall death (adjusted OR 2.75; 95% CI 1.40, 5.39; p = 0.003) and death during TB treatment (adjusted OR 2.43; 95% CI 1.13, 5.23; p = 0.023). 
Additionally, while non-diabetic TB patients had a significant decrease in the mortality from 2010 to 2016, the mortality trend in TB-DM patients is unchanged overtime and consistently higher than that of non-diabetic patient (z = 3.05, p = 0.002) [35]. Although there is an increasing number of TB mortality risk scores being developed, we are not aware of a scoring system that is specific for the TB-DM population. In the TB mortality scoring systems presented by Horita (2013) and Pefura-Yone (2017), TB-DM patients were not included [11, 12]. In the prognostic models presented by Lui (2014), Nagai (2016), and Bastos (2016), only a small number of TB-DM patients were included in the study samples (ranged from n = 74 to n = 84) [13,14,15]. As these models used hospital-based data, many variables in these models such as respiratory failure requiring oxygen, serum albumin, activity of daily living, dehydration, hypoxemic respiratory failure, orientation disturbance, etc. are not routinely collected by TB programs. Using demographic and clinical characteristics in TB-DM patients, which are available for most TB program, our TB-DM mortality model is more practical and can be used in difference health care and public health settings. Although we have previously developed a mortality risk model including all patients staring TB treatment which has been shown to have a good overall diagnostic performance (AUC 0.82 in development and 0.80 in validation) in all presentations of TB in general [19], the model's AUC decreases to 0.76 in TB-DM patients. When the variable selection was specifically calibrated for TB-DM patients, our new TB-DM specific predictive model provided a more accurate prognosis in the TB-DM population. Our risk model has several notable limitations. First, the study excluded 233 patients who died at the diagnosis or had a treatment outcome of other than "died" or "completed", which may be prone to potential misclassification bias. However, except for having a higher proportion of IDUs, the excluded patients had no other significant differences in the demographic and clinical characteristics compared with the included patients. Therefore, the misclassification bias, if any, would be minimal. Second, given our goal was using only the data routinely collected by the most TB programs, information regarding the diabetes treatment, lipid profiles and HbA1c were not evaluated in our model. Despite the lack of DM-specific variables, the model can still correctly discriminate the risk of mortality in most of the cases with an AUC of 0.83. Third, as our scoring system was developed in the US, external validation in settings of high TB burden would be necessary. Fourth, the use of surveillance data itself has some limitations. For example, certain self-reported data were obtained from interviewing TB patients which leads to a possibility that recall bias cannot be completely ruled out. Treatment failure or relapse were not well defined in the dataset. Treatment time and time to event data were not available and prevent us from performing more robust survival analyses. As our primary goal was to develop a predictive model for death during TB treatment, mortality prior to starting TB treatment has been excluded. Therefore, our findings may not reflect the overall mortality in TB-DM patients. Lastly, as Texas is one of the US states with a high TB burden, external validation in populations in different US states would also be needed. 
Despite the limitations, there are several strengths making our prognostic score distinct: (1) using population-based surveillance data of the entire state of Texas during 7 years; (2) including exclusively TB-DM patients with a large sample size; (3) using demographic and clinical characteristics which are routinely collected by most TB programs from the initial patient visits, our model can be used to identify the TB-DM with high mortality risk before having the biological confirmation for Mtb; (4) having good discrimination in both development and bootstrap internal validation (AUC = 0.83 and 082, respectively) and good calibration; and (5) providing a simple scoring system with the available of a mobile app for easily calculating the predicted probability of death during TB treatment. Using demographic and clinical characteristics variables which are routinely collected by most TB programs from the initial patient visits, our simple scoring system can be used without waiting for the Mtb biological result and achieves good discrimination and calibration with the internal validation. With the free calculator app compatible with android and iOS mobile devices, the score would be a practical clinical tool for TB health professionals in identifying TB-DM comorbid patients who have high mortality risk so that appropriate approaches would be implemented to improve the patient outcomes. AFB: Acid-fast bacilli AUC: Area under the Receiver Operating Characteristic curve BMA: Bayesian modeling averaging Combination antiretroviral therapy IDU: Injecting-drug user MCOD: Multiple cause-of-death NTSS: National TB Surveillance System ROC: Receiver operating characteristic TB-DM: Tuberculosis-diabetes Dooley KE, Chaisson RE. Tuberculosis and diabetes mellitus: convergence of two epidemics. Lancet Infect Dis. 2009;9(12):737–46. National Vital Statistics Reports: Deaths: Final Data for 2015. Available at https://www.cdc.gov/nchs/data/nvsr/nvsr66/nvsr66_06.pdf. Accessed on 03/06/2018. 2017, 66(6). World Health Organization (WHO): Global tuberculosis report 2017. Available at http://www.who.int/tb/publications/global_report/en/. Accessed on 03/19/2018. 2018 . Al-Rifai RH, Pearson F, Critchley JA, Abu-Raddad LJ. Association between diabetes mellitus and active tuberculosis: a systematic review and meta-analysis. PLoS One. 2017;12(11):e0187967. Centers for Disease Control and Prevention (CDC): National Diabetes Statistics Report, 2017. Estimates of Diabetes and Its Burden in the United States. Available at https://www.cdc.gov/diabetes/pdfs/data/statistics/national-diabetes-statistics-report.pdf. Accessed on 03/06/2018. Centers for Disease Control and Prevention (CDC): Long-term Trends in Diabetes. April 2017. Available at https://www.cdc.gov/diabetes/statistics/slides/long_term_trends.pdf. Accessed on 03/06/2018. Texas Department of State Health Services: Diabetes Trend Data - Texas and US. Available at https://www.dshs.texas.gov/diabetes/PDF/data/DiabetesTrendData2017.pdf. Accessed on 03/01/2018. Restrepo BI, Camerlin AJ, Rahbar MH, Wang W, Restrepo MA, Zarate I, Mora-Guzman F, Crespo-Solis JG, Briggs J, McCormick JB, Fisher-Hoch SP. Cross-sectional assessment reveals high diabetes prevalence among newly-diagnosed tuberculosis cases. Bull World Health Organ. 2011;89(5):352–9. Baker MA, Harries AD, Jeon CY, Hart JE, Kapur A, Lonnroth K, Ottmani SE, Goonesekera SD, Murray MB. The impact of diabetes on tuberculosis treatment outcomes: a systematic review. BMC Med. 2011;9:81–7015-9-81. 
Faurholt-Jepsen D, Range N, PrayGod G, Jeremiah K, Faurholt-Jepsen M, Aabye MG, Changalucha J, Christensen DL, Grewal HM, Martinussen T, Krarup H, Witte DR, Andersen AB, Friis H. Diabetes is a strong predictor of mortality during tuberculosis treatment: a prospective cohort study among tuberculosis patients from Mwanza, Tanzania. Tropical Med Int Health. 2013;18(7):822–9. Horita N, Miyazawa N, Yoshiyama T, Sato T, Yamamoto M, Tomaru K, Masuda M, Tashiro K, Sasaki M, Morita S, Kaneko T, Ishigatsubo Y. Development and validation of a tuberculosis prognostic score for smear-positive in-patients in Japan. Int J Tuberc Lung Dis. 2013;17(1):54–60. Pefura-Yone EW, Balkissou AD, Poka-Mayap V, Fatime-Abaicho HK, Enono-Edende PT, Kengne AP. Development and validation of a prognostic score during tuberculosis treatment. BMC Infect Dis. 2017;17(1):251–017-2309-9. Nagai K, Horita N, Sato T, Yamamoto M, Nagakura H, Kaneko T. Age, dehydration, respiratory failure, orientation disturbance, and blood pressure score predicts in-hospital mortality in HIV-negative non-multidrug-resistant smear-positive pulmonary tuberculosis in Japan. Sci Rep. 2016;6:21610. Bastos HN, Osorio NS, Castro AG, Ramos A, Carvalho T, Meira L, Araujo D, Almeida L, Boaventura R, Fragata P, Chaves C, Costa P, Portela M, Ferreira I, Magalhaes SP, Rodrigues F, Sarmento-Castro R, Duarte R, Guimaraes JT, Saraiva M. A prediction rule to stratify mortality risk of patients with pulmonary tuberculosis. PLoS One. 2016;11(9):e0162797. Lui G, Wong RY, Li F, Lee MK, Lai RW, Li TC, Kam JK, Lee N. High mortality in adults hospitalized for active tuberculosis in a low HIV prevalence setting. PLoS One. 2014;9(3):e92077. Tuberculosis Case Definition for Public Health Surveillance, CDC Tuberculosis Surveillance Data Training, Report of Verified Case of Tuberculosis (RCT), June 2009. Available at https://www.cdc.gov/tb/programs/rvct/instructionmanual.pdf. Accessed 05 Jan 2018. Wasserman L. Bayesian model selection and model averaging. J Math Psychol. 2000;44(1):92–107. Dunson DB, Herring AH. Bayesian model selection and averaging in additive and proportional hazards models. Lifetime Data Anal. 2005;11(2):213–32. Nguyen DT, Graviss EA. Development and validation of a prognostic score to predict tuberculosis mortality. J Infect. 2018;77(4):283–90. Epub 2018 Apr 9. Nguyen DT, Jenkins HE, Graviss EA. Prognostic score to predict mortality during TB treatment in TB/HIV co-infected patients. PLoS One. 2018;13(4):e0196022. Rassi A, Rassi A, Little WC, Xavier S, Rassi S, Rassi AG, Rassi GG, Hasslocher-Moreno A, Sousa AS, Scanavacca MI. Development and validation of a risk score for predicting death in Chagas' heart disease. N Engl J Med. 2006;355(8):799–808. Yoon YS, Jung JW, Jeon EJ, Seo H, Ryu YJ, Yim JJ, Kim YH, Lee BH, Park YB, Lee BJ, Kang H, Choi JC. The effect of diabetes control status on treatment response in pulmonary tuberculosis: a prospective study. Thorax. 2017;72(3):263–70. Riza AL, Pearson F, Ugarte-Gil C, Alisjahbana B, van de Vijver S, Panduru NM, Hill PC, Ruslami R, Moore D, Aarnoutse R, Critchley JA, van Crevel R. Clinical management of concurrent diabetes and tuberculosis and the implications for patient services. Lancet Diabetes Endocrinol. 2014;2(9):740–53. Podlekareva DN, Panteleev AM, Grint D, Post FA, Miro JM, Bruyand M, Furrer H, Obel N, Girardi E, Vasilenko A, Losso MH, Arenas-Pinto A, Cayla J, Rakhmanova A, Zeltina I, Werlinrud AM, Lundgren JD, Mocroft A, Kirk O. 
HIV/TB study group: short- and long-term mortality and causes of death in HIV/tuberculosis patients in Europe. Eur Respir J. 2014;43(1):166–77. Panel on Antiretroviral Guidelines for Adults and Adolescents: Guidelines for the Use of Antiretroviral Agents in Adults and Adolescents Living with HIV. Department of Health and Human Services. Available at: http://www.aidsinfo.nih.gov/ContentFiles/AdultandAdolescentGL.pdf. Assessed on 03/01/2018. 2018. Cruz-Hervert LP, Garcia-Garcia L, Ferreyra-Reyes L, Bobadilla-del-Valle M, Cano-Arellano B, Canizales-Quintero S, Ferreira-Guerrero E, Baez-Saldana R, Tellez-Vazquez N, Nava-Mercado A, Juarez-Sandino L, Delgado-Sanchez G, Fuentes-Leyra CA, Montero-Campos R, Martinez-Gamboa RA, Small PM, Sifuentes-Osornio J, Ponce-de-Leon A. Tuberculosis in ageing: high rates, complex diagnosis and poor clinical outcomes. Age Ageing. 2012;41(4):488–95. Reis-Santos B, Gomes T, Horta BL, Maciel EL. The outcome of tuberculosis treatment in subjects with chronic kidney disease in Brazil: a multinomial analysis. J Bras Pneumol. 2013;39(5):585–94. Negin J, Abimbola S, Marais BJ. Tuberculosis among older adults--time to take notice. Int J Infect Dis. 2015;32:135–7. Lee HG, William T, Menon J, Ralph AP, Ooi EE, Hou Y, Sessions O, Yeo TW. Tuberculous meningitis is a major cause of mortality and morbidity in adults with central nervous system infections in Kota Kinabalu, Sabah, Malaysia: an observational study. BMC Infect Dis. 2016;16:296–016-1640-x. Jung RS, Bennion JR, Sorvillo F, Bellomy A. Trends in tuberculosis mortality in the United States, 1990-2006: a population-based case-control study. Public Health Rep. 2010;125(3):389–97. Magee MJ, Foote M, Maggio DM, Howards PP, Narayan KM, Blumberg HM, Ray SM, Kempker RR. Diabetes mellitus and risk of all-cause mortality among patients with tuberculosis in the state of Georgia, 2009-2012. Ann Epidemiol. 2014;24(5):369–75. Texas Department of State Health Services: Epidemiology and Supplemental Projects Group. Texas TB Surveillance Annual Report 2016. Austin, TX; 2017. Available at: https://www.dshs.texas.gov/IDCU/disease/tb/statistics/TBSurveillanceReport.pdf. Accessed on 05/04/2018. Centers for Disease Control and Prevention (CDC): Reported Tuberculosis in the United States, 2016. Available at: https://www.cdc.gov/tb/statistics/reports/2016/pdfs/2016_Surveillance_FullReport.pdf. Accessed on 03/10/2018. 2017. Abdelbary BE, Garcia-Viveros M, Ramirez-Oropesa H, Rahbar MH, Restrepo BI. Tuberculosis-diabetes epidemiology in the border and non-border regions of Tamaulipas, Mexico. Tuberculosis (Edinb). 2016;101S:S124–34. Nguyen DT, Graviss EA: Trends of diabetes and associated mortality in tuberculosis patients in Texas, a large population-based analysis. 5th Texas Tuberculosis Research Symposium, El Paso, United States, 2/9/18.[abstract]. The authors acknowledge the selfless work of public health officials and staff at the City of Houston Bureau of Tuberculosis Control, Houston Department of Health & Human Services, Harris County Public Health, TB Elimination Program, Texas Department of State Health Services and the US Centers for Disease Control that made the data available for use in this analysis. Data are available on CDC TBGIMS (https://www.cdc.gov/tb/programs/genotyping/tbgims/) to standard users. Department of Pathology and Genomic Medicine, Houston Methodist Research Institute, Mail Station: R6-414, 6670 Bertner Ave, Houston, TX, 77030, USA Duc T. Nguyen & Edward A. Graviss Search for Duc T. Nguyen in: Search for Edward A. 
Graviss in: Concept of study (DTN, EAG), study design (DTN, EAG), acquisition of data (DTN, EAG), data analysis (DTN), and writing/revising the manuscript (DTN, EAG). Both authors read and approved the final manuscript. Correspondence to Edward A. Graviss. As this analysis used the retrospective de-identified surveillance data, ethics approval and consent to participate were not required. Table S1. Demographic and clinical characteristics of the study population compared with those excluded from the analyses. (DOCX 22 kb) Risk score TB-diabetes TB-DM
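As a concrete illustration of the scoring system and the predicted-probability formula reported above, the calculation can be sketched as follows. This is not the authors' published calculator app; the function and variable names are ours, and only the point weights, risk-group cut-offs, intercept and coefficients are taken from the article.

```python
# Illustrative sketch of the published TB-DM mortality risk score and predicted
# probability. Point weights, cut-offs, intercept and coefficients come from the
# article above; the function/variable names and this packaging are ours.
import math

POINTS = {   # weighted points (Table 2)
    "age_ge_65": 16, "us_born": 5, "homeless": 11, "idu": 20,
    "chronic_kidney_failure": 20, "tb_meningitis": 20,
    "miliary_tb": 13, "afb_smear_positive": 6, "hiv_positive": 24,
}
COEF = {     # logistic regression coefficients of the final model
    "age_ge_65": 1.579789, "us_born": 0.4946987, "homeless": 1.05767,
    "idu": 1.980345, "chronic_kidney_failure": 1.945451,
    "tb_meningitis": 1.981255, "miliary_tb": 1.332084,
    "afb_smear_positive": 0.5537461, "hiv_positive": 2.404202,
}
INTERCEPT = -4.004594

def risk_score(patient):
    """Sum of weighted points for the risk factors present (dict of booleans)."""
    return sum(p for name, p in POINTS.items() if patient.get(name))

def risk_group(score):
    """Low < 12 points, medium 12-21 points, high >= 22 points."""
    return "low" if score < 12 else ("medium" if score <= 21 else "high")

def predicted_probability(patient):
    """Probability of death during TB treatment via the logistic model."""
    eta = INTERCEPT + sum(c for name, c in COEF.items() if patient.get(name))
    return math.exp(eta) / (1.0 + math.exp(eta))

# Example: a US-born, AFB smear-positive patient aged >= 65 years.
example = {"age_ge_65": True, "us_born": True, "afb_smear_positive": True}
score = risk_score(example)                 # 16 + 5 + 6 = 27 points -> "high"
print(score, risk_group(score), round(predicted_probability(example), 3))
```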
CommonCrawl
Gravity bends space, so how can space have only 3 dimensions? [duplicate]
Better explanation of the common general relativity illustration (stretched sheet of fabric) (5 answers)
Due to the laws of gravity and electromagnetic attraction (decreasing with $\frac{1}{r^2}$) we know that space should be limited to 3 dimensions. At the same time we know that gravity bends space. All visualizations of that are basically elastic 2D membranes where heavy balls are placed to create "gravity wells" of various sizes. The watcher is left to imagine all that happening with one more dimension, and then you've got the way gravity bends space. However, the membrane visualization works by distorting the 2D plane into the 3rd dimension. If I go 1 dimension higher, logically 3D space should be distorted into the 4th dimension. Basically, gravity distorts space into a dimension that shouldn't exist according to the laws of gravity? Question: where does this logical chain of arguments have a hole? (Assume I have a lot of popular science knowledge and a background in Mathematics and computer science, but no physics)
general-relativity gravity spacetime curvature spacetime-dimensions – subrunner
Mass and energy bend spacetime, not space. – Not_Einstein Sep 6 '20 at 12:22
You can consider a space that is bent without embedding it into a higher dimensional space (as is done with the membrane). You can define "bentness" as an intrinsic property making no reference to the embedding (just by looking at the "straightest" lines in the bent space). – Sebastian Riese Sep 6 '20 at 12:24
Mandatory xkcd. – Philip Sep 6 '20 at 12:37
Possible duplicates: physics.stackexchange.com/a/13839/2451 , physics.stackexchange.com/q/90592/2451 and links therein. – Qmechanic♦ Sep 6 '20 at 13:23
New Veritasium video: Why Gravity is NOT a Force – mmesser314 Oct 18 '20 at 2:29
What you're asking is actually a very important topic in general relativity. One of the most important features of the mathematical surfaces used in general relativity is that their curvature can be defined without making reference to an "external" space into which we have to distort the surface. In the popular visual presentation of general relativity we take a 2-dimensional surface and we "press" into it to cause it to bend. This seems to require a 3rd dimension in which to press the surface. However, one of the most important discoveries in the study of curved surfaces is Gauss' theorema egregium, which essentially states that we can fully describe the curvature of a surface without needing to make any reference to a surrounding, higher-dimensional space in which it is embedded. This ability to describe curvature without referencing the surrounding space is called intrinsic curvature (as opposed to extrinsic) if you want the fancy terminology. It also makes sense that we should be able to describe our universe without needing to make reference to an "external space" around the universe into which it curves. Just as an additional note: spacetime, the surface on which general relativity is formulated, is a four-dimensional surface, and so the presence of mass and energy curves spacetime rather than just space alone.
Unfortunately there is no way to nicely draw 4-dimensional space like you can with 3-dimensional space. – Charlie
Huh. I think I'm now even more confused. If time was an actual dimension, wouldn't we already have a paradox of 4D when the laws of gravity say we only have 3? And the intrinsic curvature - I get that it is possible to describe it without referencing the surrounding space. But nobody denies that it is embedded in a higher dimension, we just don't know how? Or are there ways to have a plane with non-zero Gaussian curvature? – subrunner Sep 6 '20 at 13:46
@subrunner The "laws of gravity" do not say we only have 3 dimensions. In Newtonian gravity (an old model of gravity) we treat objects as moving through 3-dimensional space; in general relativity (the new model) we treat the universe as a 4-dimensional surface with time becoming "just another dimension" of the surface. – Charlie Sep 6 '20 at 14:03
@subrunner Also, just regarding what you've said about embedding, there is no reason to believe the universe is embedded in a higher dimensional space (in fact, a lot of people would argue it is rather uneconomical to do so, since as Gauss proved it is possible to completely describe the curvature of the surface without referencing the surrounding space). – Charlie Sep 6 '20 at 14:04
The membrane analogy works great, but only if you don't think about it too much. The only reason it works (balls orbit around the depression) is because there is a real gravitational field that no one ever mentions, the one on the planet where the experiment is being done. It wouldn't work at all in free fall in space. Instead of thinking of it as bending, it would be better to think of it as distorting. Consider the part of a balloon opposite the open end. The rubber there tends to be tougher than elsewhere, even when the balloon is inflated, so much so that one can insert a pin through it without causing it to pop. The analogy is that just as there is more rubber concentrated in that one small area, there is much more space concentrated near large masses. Measuring the distance across one of these concentrated areas produces a value that is larger than what d = c/π would predict. No extra dimension is needed. – Ray Butterworth
I would say it is completely misleading. The sheer number of questions here and elsewhere is its own proof. You can prove me wrong by showing a calculation based on the rubber sheet. – m4r35n357 Sep 6 '20 at 13:32
Space is curved if you don't come back to your starting point when you walk around a square. Or equivalently, you wind up at different points if you walk east-then-north vs north-then-east. The surface of the Earth is curved in this sense. It doesn't show for a small square. But try a really large square. Start on the equator. Walk 1/4 of the way around the world to the east. Turn left and walk 1/4 of the way around the world to the north. You are at the north pole. Now start over from the same point: walk 1/4 of the way around the world to the north. Turn right, and walk 1/4 of the way around the world. (OK, it isn't east because coordinates are weird at the north pole.) But you are on the equator. The two orders of travel end at different places. In GR, a mass causes distortions of distance and time. If you are in orbit, the distance to the center of a star is deeper than you would expect from dividing the circumference traced out by the orbit by $2\pi$. Time runs slower at the surface than in orbit.
Space-time is 4 dimensional, so you get an extra direction you can walk around the block. You can also wait a while. So trace out this "square" where one side is distance, and the other time. Start at a point above the star. Have a person at the top wait a bit, then find the point/time a distance X below him at that time. Find the point/time a distance X below the top person right now. Have someone at that bottom point wait a bit. Time is slower at the bottom. In his travel through time, the bottom person passes through the point/time the top person picks out. But when he does, he isn't done waiting yet. – mmesser314
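As a concrete illustration of intrinsic curvature (a standard textbook fact, added here for reference rather than taken from either answer): on a sphere of radius $R$, the angle sum of a geodesic triangle exceeds $\pi$ by an amount fixed entirely by measurements made on the surface,
$$\alpha + \beta + \gamma - \pi = \frac{A}{R^{2}},$$
where $A$ is the triangle's area. The triangle suggested by the walking example (a quarter of the equator plus the two meridians joining its endpoints to the pole) has three right angles, so the excess is $\pi/2$; the triangle covers one eighth of the sphere, $A = \tfrac{1}{8}\cdot 4\pi R^{2} = \tfrac{\pi R^{2}}{2}$, and indeed $A/R^{2} = \pi/2$. A surveyor confined to the surface can detect this excess without ever leaving it, which is exactly the sense in which curvature is intrinsic.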
Publications of the Astronomical Society of Australia (8) The Journal of Agricultural Science (1) WALLABY pilot survey: Public release of H i data for almost 600 galaxies from phase 1 of ASKAP pilot observations T. Westmeier, N. Deg, K. Spekkens, T. N. Reynolds, A. X. Shen, S. Gaudet, S. Goliath, M. T. Huynh, P. Venkataraman, X. Lin, T. O'Beirne, B. Catinella, L. Cortese, H. Dénes, A. Elagali, B.-Q. For, G. I. G. Józsa, C. Howlett, J. M. van der Hulst, R. J. Jurek, P. Kamphuis, V. A. Kilborn, D. Kleiner, B. S. Koribalski, K. Lee-Waddell, C. Murugeshan, J. Rhee, P. Serra, L. Shao, L. Staveley-Smith, J. Wang, O. I. Wong, M. A. Zwaan, J. R. Allison, C. S. Anderson, Lewis Ball, D. C.-J. Bock, D. Brodrick, J. D. Bunton, F. R. Cooray, N. Gupta, D. B. Hayman, E. K. Mahony, V. A. Moss, A. Ng, S. E. Pearce, W. Raja, D. N. Roxby, M. A. Voronkov, K. A. Warhurst, H. M. Courtois, K. Said Journal: Publications of the Astronomical Society of Australia / Volume 39 / 2022 Published online by Cambridge University Press: 15 November 2022, e058 We present WALLABY pilot data release 1, the first public release of H i pilot survey data from the Wide-field ASKAP L-band Legacy All-sky Blind Survey (WALLABY) on the Australian Square Kilometre Array Pathfinder. Phase 1 of the WALLABY pilot survey targeted three $60\,\mathrm{deg}^{2}$ regions on the sky in the direction of the Hydra and Norma galaxy clusters and the NGC 4636 galaxy group, covering the redshift range of $z \lesssim 0.08$ . The source catalogue, images and spectra of nearly 600 extragalactic H i detections and kinematic models for 109 spatially resolved galaxies are available. As the pilot survey targeted regions containing nearby group and cluster environments, the median redshift of the sample of $z \approx 0.014$ is relatively low compared to the full WALLABY survey. The median galaxy H i mass is $2.3 \times 10^{9}\,{\rm M}_{{\odot}}$ . The target noise level of $1.6\,\mathrm{mJy}$ per 30′′ beam and $18.5\,\mathrm{kHz}$ channel translates into a $5 \sigma$ H i mass sensitivity for point sources of about $5.2 \times 10^{8} \, (D_{\rm L} / \mathrm{100\,Mpc})^{2} \, {\rm M}_{{\odot}}$ across 50 spectral channels ( ${\approx} 200\,\mathrm{km \, s}^{-1}$ ) and a $5 \sigma$ H i column density sensitivity of about $8.6 \times 10^{19} \, (1 + z)^{4}\,\mathrm{cm}^{-2}$ across 5 channels ( ${\approx} 20\,\mathrm{km \, s}^{-1}$ ) for emission filling the 30′′ beam. As expected for a pilot survey, several technical issues and artefacts are still affecting the data quality. Most notably, there are systematic flux errors of up to several 10% caused by uncertainties about the exact size and shape of each of the primary beams as well as the presence of sidelobes due to the finite deconvolution threshold. In addition, artefacts such as residual continuum emission and bandpass ripples have affected some of the data. The pilot survey has been highly successful in uncovering such technical problems, most of which are expected to be addressed and rectified before the start of the full WALLABY survey. GASKAP-HI pilot survey science I: ASKAP zoom observations of Hi emission in the Small Magellanic Cloud N. M. Pingel, J. Dempsey, N. M. McClure-Griffiths, J. M. Dickey, K. E. Jameson, H. Arce, G. Anglada, J. Bland-Hawthorn, S. L. Breen, F. Buckland-Willis, S. E. Clark, J. R. Dawson, H. Dénes, E. M. Di Teodoro, B.-Q. For, Tyler J. Foster, J. F. Gómez, H. Imai, G. Joncas, C.-G. Kim, M.-Y. Lee, C. Lynn, D. Leahy, Y. K. Ma, A. Marchal, D. McConnell, M.-A. Miville-Deschènes, V. 
A. Moss, C. E. Murray, D. Nidever, J. Peek, S. Stanimirović, L. Staveley-Smith, T. Tepper-Garcia, C. D. Tremblay, L. Uscanga, J. Th. van Loon, E. Vázquez-Semadeni, J. R. Allison, C. S. Anderson, Lewis Ball, M. Bell, D. C.-J. Bock, J. Bunton, F. R. Cooray, T. Cornwell, B. S. Koribalski, N. Gupta, D. B. Hayman, L. Harvey-Smith, K. Lee-Waddell, A. Ng, C. J. Phillips, M. Voronkov, T. Westmeier, M. T. Whiting Published online by Cambridge University Press: 07 February 2022, e005 We present the most sensitive and detailed view of the neutral hydrogen ( ${\rm H\small I}$ ) emission associated with the Small Magellanic Cloud (SMC), through the combination of data from the Australian Square Kilometre Array Pathfinder (ASKAP) and Parkes (Murriyang), as part of the Galactic Australian Square Kilometre Array Pathfinder (GASKAP) pilot survey. These GASKAP-HI pilot observations, for the first time, reveal ${\rm H\small I}$ in the SMC on similar physical scales as other important tracers of the interstellar medium, such as molecular gas and dust. The resultant image cube possesses an rms noise level of 1.1 K ( $1.6\,\mathrm{mJy\ beam}^{-1}$ ) $\mathrm{per}\ 0.98\,\mathrm{km\ s}^{-1}$ spectral channel with an angular resolution of $30^{\prime\prime}$ ( ${\sim}10\,\mathrm{pc}$ ). We discuss the calibration scheme and the custom imaging pipeline that utilises a joint deconvolution approach, efficiently distributed across a computing cluster, to accurately recover the emission extending across the entire ${\sim}25\,\mathrm{deg}^2$ field-of-view. We provide an overview of the data products and characterise several aspects including the noise properties as a function of angular resolution and the represented spatial scales by deriving the global transfer function over the full spectral range. A preliminary spatial power spectrum analysis on individual spectral channels reveals that the power law nature of the density distribution extends down to scales of 10 pc. We highlight the scientific potential of these data by comparing the properties of an outflowing high-velocity cloud with previous ASKAP+Parkes ${\rm H\small I}$ test observations. The ASKAP Variables and Slow Transients (VAST) Pilot Survey Australian SKA Pathfinder Tara Murphy, David L. Kaplan, Adam J. Stewart, Andrew O'Brien, Emil Lenc, Sergio Pintaldi, Joshua Pritchard, Dougal Dobie, Archibald Fox, James K. Leung, Tao An, Martin E. Bell, Jess W. Broderick, Shami Chatterjee, Shi Dai, Daniele d'Antonio, Gerry Doyle, B. M. Gaensler, George Heald, Assaf Horesh, Megan L. Jones, David McConnell, Vanessa A. Moss, Wasim Raja, Gavin Ramsay, Stuart Ryder, Elaine M. Sadler, Gregory R. Sivakoff, Yuanming Wang, Ziteng Wang, Michael S. Wheatland, Matthew Whiting, James R. Allison, C. S. Anderson, Lewis Ball, K. Bannister, D. C.-J. Bock, R. Bolton, J. D. Bunton, R. Chekkala, A. P Chippendale, F. R. Cooray, N. Gupta, D. B. Hayman, K. Jeganathan, B. Koribalski, K. Lee-Waddell, Elizabeth K. Mahony, J. Marvil, N. M. McClure-Griffiths, P. Mirtschin, A. Ng, S. Pearce, C. Phillips, M. A. Voronkov Published online by Cambridge University Press: 12 October 2021, e054 The Variables and Slow Transients Survey (VAST) on the Australian Square Kilometre Array Pathfinder (ASKAP) is designed to detect highly variable and transient radio sources on timescales from 5 s to $\sim\!5$ yr. In this paper, we present the survey description, observation strategy and initial results from the VAST Phase I Pilot Survey. 
This pilot survey consists of $\sim\!162$ h of observations conducted at a central frequency of 888 MHz between 2019 August and 2020 August, with a typical rms sensitivity of $0.24\ \mathrm{mJy\ beam}^{-1}$ and angular resolution of $12-20$ arcseconds. There are 113 fields, each of which was observed for 12 min integration time, with between 5 and 13 repeats, with cadences between 1 day and 8 months. The total area of the pilot survey footprint is 5 131 square degrees, covering six distinct regions of the sky. An initial search of two of these regions, totalling 1 646 square degrees, revealed 28 highly variable and/or transient sources. Seven of these are known pulsars, including the millisecond pulsar J2039–5617. Another seven are stars, four of which have no previously reported radio detection (SCR J0533–4257, LEHPM 2-783, UCAC3 89–412162 and 2MASS J22414436–6119311). Of the remaining 14 sources, two are active galactic nuclei, six are associated with galaxies and the other six have no multi-wavelength counterparts and are yet to be identified. Australian square kilometre array pathfinder: I. system description A. W. Hotan, J. D. Bunton, A. P. Chippendale, M. Whiting, J. Tuthill, V. A. Moss, D. McConnell, S. W. Amy, M. T. Huynh, J. R. Allison, C. S. Anderson, K. W. Bannister, E. Bastholm, R. Beresford, D. C.-J. Bock, R. Bolton, J. M. Chapman, K. Chow, J. D. Collier, F. R. Cooray, T. J. Cornwell, P. J. Diamond, P. G. Edwards, I. J. Feain, T. M. O. Franzen, D. George, N. Gupta, G. A. Hampson, L. Harvey-Smith, D. B. Hayman, I. Heywood, C. Jacka, C. A. Jackson, S. Jackson, K. Jeganathan, S. Johnston, M. Kesteven, D. Kleiner, B. S. Koribalski, K. Lee-Waddell, E. Lenc, E. S. Lensson, S. Mackay, E. K. Mahony, N. M. McClure-Griffiths, R. McConigley, P. Mirtschin, A. K. Ng, R. P. Norris, S. E. Pearce, C. Phillips, M. A. Pilawa, W. Raja, J. E. Reynolds, P. Roberts, D. N. Roxby, E. M. Sadler, M. Shields, A. E. T. Schinckel, P. Serra, R. D. Shaw, T. Sweetnam, E. R. Troup, A. Tzioumis, M. A. Voronkov, T. Westmeier Published online by Cambridge University Press: 05 March 2021, e009 In this paper, we describe the system design and capabilities of the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope at the conclusion of its construction project and commencement of science operations. ASKAP is one of the first radio telescopes to deploy phased array feed (PAF) technology on a large scale, giving it an instantaneous field of view that covers $31\,\textrm{deg}^{2}$ at $800\,\textrm{MHz}$. As a two-dimensional array of 36 $\times$12 m antennas, with baselines ranging from 22 m to 6 km, ASKAP also has excellent snapshot imaging capability and 10 arcsec resolution. This, combined with 288 MHz of instantaneous bandwidth and a unique third axis of rotation on each antenna, gives ASKAP the capability to create high dynamic range images of large sky areas very quickly. It is an excellent telescope for surveys between 700 and $1800\,\textrm{MHz}$ and is expected to facilitate great advances in our understanding of galaxy formation, cosmology, and radio transients while opening new parameter space for discovery of the unknown. The Rapid ASKAP Continuum Survey I: Design and first results D. McConnell, C. L. Hale, E. Lenc, J. K. Banfield, George Heald, A. W. Hotan, James K. Leung, Vanessa A. Moss, Tara Murphy, Andrew O'Brien, Joshua Pritchard, Wasim Raja, Elaine M. Sadler, Adam Stewart, Alec J. M. Thomson, M. Whiting, James R. Allison, S. W. Amy, C. Anderson, Lewis Ball, Keith W. 
Bannister, Martin Bell, Douglas C.-J. Bock, Russ Bolton, J. D. Bunton, A. P. Chippendale, J. D. Collier, F. R. Cooray, T. J. Cornwell, P. J. Diamond, P. G. Edwards, N. Gupta, Douglas B. Hayman, Ian Heywood, C. A. Jackson, Bärbel S. Koribalski, Karen Lee-Waddell, N. M. McClure-Griffiths, Alan Ng, Ray P. Norris, Chris Phillips, John E. Reynolds, Daniel N. Roxby, Antony E. T. Schinckel, Matt Shields, Chenoa Tremblay, A. Tzioumis, M. A. Voronkov, Tobias Westmeier The Rapid ASKAP Continuum Survey (RACS) is the first large-area survey to be conducted with the full 36-antenna Australian Square Kilometre Array Pathfinder (ASKAP) telescope. RACS will provide a shallow model of the ASKAP sky that will aid the calibration of future deep ASKAP surveys. RACS will cover the whole sky visible from the ASKAP site in Western Australia and will cover the full ASKAP band of 700–1800 MHz. The RACS images are generally deeper than the existing NRAO VLA Sky Survey and Sydney University Molonglo Sky Survey radio surveys and have better spatial resolution. All RACS survey products will be public, including radio images (with $\sim$ 15 arcsec resolution) and catalogues of about three million source components with spectral index and polarisation information. In this paper, we present a description of the RACS survey and the first data release of 903 images covering the sky south of declination $+41^\circ$ made over a 288-MHz band centred at 887.5 MHz. The Australian Square Kilometre Array Pathfinder: Performance of the Boolardy Engineering Test Array D. McConnell, J. R. Allison, K. Bannister, M. E. Bell, H. E. Bignall, A. P. Chippendale, P. G. Edwards, L. Harvey-Smith, S. Hegarty, I. Heywood, A. W. Hotan, B. T. Indermuehle, E. Lenc, J. Marvil, A. Popping, W. Raja, J. E. Reynolds, R. J. Sault, P. Serra, M. A. Voronkov, M. Whiting, S. W. Amy, P. Axtens, L. Ball, T. J. Bateman, D. C.-J. Bock, R. Bolton, D. Brodrick, M. Brothers, A. J. Brown, J. D. Bunton, W. Cheng, T. Cornwell, D. DeBoer, I. Feain, R. Gough, N. Gupta, J. C. Guzman, G. A. Hampson, S. Hay, D. B. Hayman, S. Hoyle, B. Humphreys, C. Jacka, C. A. Jackson, S. Jackson, K. Jeganathan, J. Joseph, B. S. Koribalski, M. Leach, E. S. Lensson, A. MacLeod, S. Mackay, M. Marquarding, N. M. McClure-Griffiths, P. Mirtschin, D. Mitchell, S. Neuhold, A. Ng, R. Norris, S. Pearce, R. Y. Qiao, A. E. T. Schinckel, M. Shields, T. W. Shimwell, M. Storey, E. Troup, B. Turner, J. Tuthill, A. Tzioumis, R. M. Wark, T. Westmeier, C. Wilson, T. Wilson Published online by Cambridge University Press: 09 September 2016, e042 We describe the performance of the Boolardy Engineering Test Array, the prototype for the Australian Square Kilometre Array Pathfinder telescope. Boolardy Engineering Test Array is the first aperture synthesis radio telescope to use phased array feed technology, giving it the ability to electronically form up to nine dual-polarisation beams. We report the methods developed for forming and measuring the beams, and the adaptations that have been made to the traditional calibration and imaging procedures in order to allow BETA to function as a multi-beam aperture synthesis telescope. We describe the commissioning of the instrument and present details of Boolardy Engineering Test Array's performance: sensitivity, beam characteristics, polarimetric properties, and image quality. 
We summarise the astronomical science that it has produced and draw lessons from operating Boolardy Engineering Test Array that will be relevant to the commissioning and operation of the final Australian Square Kilometre Array Path telescope. Assessing climate risks in rainfed farming using farmer experience, crop calendars and climate analysis U. B. NIDUMOLU, P. T. HAYMAN, Z. HOCHMAN, H. HORAN, D. R. REDDY, G. SREENIVAS, D. M. KADIYALA Journal: The Journal of Agricultural Science / Volume 153 / Issue 8 / November 2015 Published online by Cambridge University Press: 29 April 2015, pp. 1380-1393 Climate risk assessment in cropping is generally undertaken in a top-down approach using climate records while critical farmer experience is often not accounted for. In the present study, set in south India, farmer experience of climate risk is integrated in a bottom-up participatory approach with climate data analysis. Crop calendars are used as a boundary object to identify and rank climate and weather risks faced by smallhold farmers. A semi-structured survey was conducted with experienced farmers whose income is predominantly from farming. Interviews were based on a crop calendar to indicate the timing of key weather and climate risks. The simple definition of risk as consequence × likelihood was used to establish the impact on yield as consequence and chance of occurrence in a 10-year period as likelihood. Farmers' risk experience matches well with climate records and risk analysis. Farmers' rankings of 'good' and 'poor' seasons also matched up well with their independently reported yield data. On average, a 'good' season yield was 1·5–1·65 times higher than a 'poor' season. The main risks for paddy rice were excess rains at harvesting and flowering and deficit rains at transplanting. For cotton, farmers identified excess rain at harvest, delayed rains at sowing and excess rain at flowering stages as events that impacted crop yield and quality. The risk assessment elicited from farmers complements climate analysis and provides some indication of thresholds for studies on climate change and seasonal forecasts. The methods and analysis presented in the present study provide an experiential bottom-up perspective and a methodology on farming in a risky rainfed climate. The methods developed in the present study provide a model for end-user engagement by meteorological agencies that strive to better target their climate information delivery. The Australian Square Kilometre Array Pathfinder: System Architecture and Specifications of the Boolardy Engineering Test Array A. W. Hotan, J. D. Bunton, L. Harvey-Smith, B. Humphreys, B. D. Jeffs, T. Shimwell, J. Tuthill, M. Voronkov, G. Allen, S. Amy, K. Ardern, P. Axtens, L. Ball, K. Bannister, S. Barker, T. Bateman, R. Beresford, D. Bock, R. Bolton, M. Bowen, B. Boyle, R. Braun, S. Broadhurst, D. Brodrick, K. Brooks, M. Brothers, A. Brown, C. Cantrall, G. Carrad, J. Chapman, W. Cheng, A. Chippendale, Y. Chung, F. Cooray, T. Cornwell, E. Davis, L. de Souza, D. DeBoer, P. Diamond, P. Edwards, R. Ekers, I. Feain, D. Ferris, R. Forsyth, R. Gough, A. Grancea, N. Gupta, J. C. Guzman, G. Hampson, C. Haskins, S. Hay, D. Hayman, S. Hoyle, C. Jacka, C. Jackson, S. Jackson, K. Jeganathan, S. Johnston, J. Joseph, R. Kendall, M. Kesteven, D. Kiraly, B. Koribalski, M. Leach, E. Lenc, E. Lensson, L. Li, S. Mackay, A. Macleod, T. Maher, M. Marquarding, N. McClure-Griffiths, D. McConnell, S. Mickle, P. Mirtschin, R. Norris, S. Neuhold, A. Ng, J. O'Sullivan, J. 
Pathikulangara, S. Pearce, C. Phillips, R. Y. Qiao, J. E. Reynolds, A. Rispler, P. Roberts, D. Roxby, A. Schinckel, R. Shaw, M. Shields, M. Storey, T. Sweetnam, E. Troup, B. Turner, A. Tzioumis, T. Westmeier, M. Whiting, C. Wilson, T. Wilson, K. Wormnes, X. Wu This paper describes the system architecture of a newly constructed radio telescope – the Boolardy engineering test array, which is a prototype of the Australian square kilometre array pathfinder telescope. Phased array feed technology is used to form multiple simultaneous beams per antenna, providing astronomers with unprecedented survey speed. The test array described here is a six-antenna interferometer, fitted with prototype signal processing hardware capable of forming at least nine dual-polarisation beams simultaneously, allowing several square degrees to be imaged in a single pointed observation. The main purpose of the test array is to develop beamforming and wide-field calibration methods for use with the full telescope, but it will also be capable of limited early science demonstrations. By Lenard A. Adler, Pinky Agarwal, Rehan Ahmed, Jagga Rao Alluri, Fawaz Al-Mufti, Samuel Alperin, Michael Amoashiy, Michael Andary, David J. Anschel, Padmaja Aradhya, Vandana Aspen, Esther Baldinger, Jee Bang, George D. Baquis, John J. Barry, Jason J. S. Barton, Julius Bazan, Amanda R. Bedford, Marlene Behrmann, Lourdes Bello-Espinosa, Ajay Berdia, Alan R. Berger, Mark Beyer, Don C. Bienfang, Kevin M. Biglan, Thomas M. Boes, Paul W. Brazis, Jonathan L. Brisman, Jeffrey A. Brown, Scott E. Brown, Ryan R. Byrne, Rina Caprarella, Casey A. Chamberlain, Wan-Tsu W. Chang, Grace M. Charles, Jasvinder Chawla, David Clark, Todd J. Cohen, Joe Colombo, Howard Crystal, Vladimir Dadashev, Sarita B. Dave, Jean Robert Desrouleaux, Richard L. Doty, Robert Duarte, Jeffrey S. Durmer, Christyn M. Edmundson, Eric R. Eggenberger, Steven Ender, Noam Epstein, Alberto J. Espay, Alan B. Ettinger, Niloofar (Nelly) Faghani, Amtul Farheen, Edward Firouztale, Rod Foroozan, Anne L. Foundas, David Elliot Friedman, Deborah I. Friedman, Steven J. Frucht, Oded Gerber, Tal Gilboa, Martin Gizzi, Teneille G. Gofton, Louis J. Goodrich, Malcolm H. Gottesman, Varda Gross-Tsur, Deepak Grover, David A. Gudis, John J. Halperin, Maxim D. Hammer, Andrew R. Harrison, L. Anne Hayman, Galen V. Henderson, Steven Herskovitz, Caitlin Hoffman, Laryssa A. Huryn, Andres M. Kanner, Gary P. Kaplan, Bashar Katirji, Kenneth R. Kaufman, Annie Killoran, Nina Kirz, Gad E. Klein, Danielle G. Koby, Christopher P. Kogut, W. Curt LaFrance, Patrick J.M. Lavin, Susan W. Law, James L. Levenson, Richard B. Lipton, Glenn Lopate, Daniel J. Luciano, Reema Maindiratta, Robert M. Mallery, Georgios Manousakis, Alan Mazurek, Luis J. Mejico, Dragana Micic, Ali Mokhtarzadeh, Walter J. Molofsky, Heather E. Moss, Mark L. Moster, Manpreet Multani, Siddhartha Nadkarni, George C. Newman, Rolla Nuoman, Paul A. Nyquist, Gaia Donata Oggioni, Odi Oguh, Denis Ostrovskiy, Kristina Y. Pao, Juwen Park, Anastas F. Pass, Victoria S. Pelak, Jeffrey Peterson, John Pile-Spellman, Misha L. Pless, Gregory M. Pontone, Aparna M. Prabhu, Michael T. Pulley, Philip Ragone, Prajwal Rajappa, Venkat Ramani, Sindhu Ramchandren, Ritesh A. Ramdhani, Ramses Ribot, Heidi D. Riney, Diana Rojas-Soto, Michael Ronthal, Daniel M. Rosenbaum, David B. Rosenfield, Durga Roy, Michael J. Ruckenstein, Max C. Rudansky, Eva Sahay, Friedhelm Sandbrink, Jade S. Schiffman, Angela Scicutella, Maroun T. Semaan, Robert C. Sergott, Aashit K. Shah, David M. Shaw, Amit M. 
Shelat, Claire A. Sheldon, Anant M. Shenoy, Yelizaveta Sher, Jessica A. Shields, Tanya Simuni, Rajpaul Singh, Eric E. Smouha, David Solomon, Mehri Songhorian, Steven A. Sparr, Egilius L. H. Spierings, Eve G. Spratt, Beth Stein, S.H. Subramony, Rosa Ana Tang, Cara Tannenbaum, Hakan Tekeli, Amanda J. Thompson, Michael J. Thorpy, Matthew J. Thurtell, Pedro J. Torrico, Ira M. Turner, Scott Uretsky, Ruth H. Walker, Deborah M. Weisbrot, Michael A. Williams, Jacques Winter, Randall J. Wright, Jay Elliot Yasen, Shicong Ye, G. Bryan Young, Huiying Yu, Ryan J. Zehnder Edited by Alan B. Ettinger, Albert Einstein College of Medicine, New York, Deborah M. Weisbrot, State University of New York, Stony Brook Book: Neurologic Differential Diagnosis Print publication: 17 April 2014, pp xi-xx Measuring Noise Temperatures of Phased-Array Antennas for Astronomy at CSIRO A. P. Chippendale, D. B. Hayman, S. G. Hay Published online by Cambridge University Press: 01 April 2014, e019 We describe the development of a noise-temperature testing capability for phased-array antennas operating in receive mode from 0.7 GHz to 1.8 GHz. Sampled voltages from each array port were recorded digitally as the zenith-pointing array under test was presented with three scenes: (1) a large microwave absorber at ambient temperature, (2) the unobstructed radio sky, and (3) broadband noise transmitted from a reference antenna centred over and pointed at the array under test. The recorded voltages were processed in software to calculate the beam equivalent noise temperature for a maximum signal-to-noise ratio beam steered at the zenith. We introduced the reference-antenna measurement to make noise measurements with reproducible, well-defined beams directed at the zenith and thereby at the centre of the absorber target. We applied a detailed model of cosmic and atmospheric contributions to the radio sky emission that we used as a noise-temperature reference. We also present a comprehensive analysis of measurement uncertainty including random and systematic effects. The key systematic effect was due to uncertainty in the beamformed antenna pattern and how efficiently it illuminates the absorber load. We achieved a combined uncertainty as low as 4 K for a 40 K measurement of beam equivalent noise temperature. The measurement and analysis techniques described in this paper were pursued to support noise-performance verification of prototype phased-array feeds for the Australian Square Kilometre Array Pathfinder telescope.
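The three-scene procedure described in the last abstract is a refinement of the classic hot/cold-load (Y-factor) radiometer measurement. The beamformed analysis in the paper is considerably more involved, but a minimal two-load sketch of the underlying arithmetic might look like this (function and variable names are illustrative, not from any ASKAP software):

def y_factor_noise_temperature(p_hot, p_cold, t_hot, t_cold):
    """Receiver noise temperature (K) from a hot/cold load measurement.

    p_hot, p_cold: measured powers (any linear unit) on the absorber (hot)
                   and unobstructed sky (cold) scenes.
    t_hot, t_cold: brightness temperatures of the two scenes in kelvin.
    """
    y = p_hot / p_cold                        # the Y-factor
    return (t_hot - y * t_cold) / (y - 1.0)   # standard Y-factor relation

# Example with made-up numbers: a 295 K ambient absorber, a 5 K modelled
# radio sky and a measured power ratio of 4.5 imply a receiver noise
# temperature of roughly 78 K.
print(y_factor_noise_temperature(4.5, 1.0, 295.0, 5.0))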
BMC Chemistry Green and simple production of graphite intercalation compound used sodium bicarbonate as intercalation agent Xin Wang1, Guogang Wang2 & Long Zhang3 BMC Chemistry volume 16, Article number: 13 (2022) Cite this article In view of the technical difficulties in the preparation of graphite intercalation compound (GIC) such as complex processes, the need to use strong acid reagents, and the product containing corrosive elements. A novel, efficient and simple method used sodium bicarbonate as intercalation agent was developed, which combined with mechanical force and chemical method for the green production of GIC. The production parameters were optimized by the single factor experiments, the optimal conditions were the ball mill speed 500 r/min for 4 h (6 mm diameter of the stainless-steel beads as ball milling media), the decomposition temperature 200 ℃ for 4 h, and 1:1 mass ratio of flake graphite to sodium bicarbonate. SEM results revealed that the prepared product appears the lamellar separation, pores, and semi-open morphology characteristic of GIC. FT-IR results indicated that the preparation method does not change the carbon-based structure, and the sodium bicarbonate intercalant has entered the interlayer of graphite flakes to form GIC. XRD results further showed that the GIC products still maintained the structure of carbon atoms or molecules, and the sodium bicarbonate intercalation agent has entered the interlayer of the graphite, and increased the interlayer distance of the layered graphite. The expandability of GIC products was studied, and the results show that it was expandable, and the expandable volume of GIC products prepared under optimal conditions has reached 142 mL/g. The theoretical basis for large-scale production was provided by studied the mechanism of the preparation method and designed the flow chart. The method has the advantages of simple process, products free of impurities, no use of aggressive reagents, process stable, and does not pollute the environment, being favorable to mass production, and provided new preparation method and idea for two-dimensional nanomaterials with preparation technical difficulties. Carbon is abundantly distributed on the earth, and it can constitute many carbon materials with special properties. Graphite is an allotrope of carbon, it has excellent properties such as corrosion resistance, good heat resistance, and stable chemical properties [1]. The various excellent properties make it have broad application prospects in many fields [2]. In recent years, it was found that graphite intercalation compound (GIC) can be obtained by appropriate treatment of graphite. GIC maintains the planar hexagonal layered structure, and at the same time, the intercalation material interacts with the carbon layer, which changes some structural parameters between the layers and the layers. Therefore, GIC maintaining the excellent properties of graphite, such as high conductivity, light weight, and high specific surface area. At the same time, GIC also shows many special properties such as resistance to corrosion and oxidation, resistance to high and low temperatures, and so on [3]. Studies have shown that expandability is one of the important indicators of GIC products in practical applications. 
Expandable GIC can quickly decompose and generate a large amount of gas at suitable temperature, which makes graphite expanded dozens or even hundreds of times along the C axis, making it have important industrial value and industrial application prospects [4]. So far, the most popular methods for the preparation of GIC include chemical oxidation [5], electrochemical oxidation [6], vapor diffusion method [7] and ultrasonic oxidation [8]. The above preparation methods are often use aggressive reagents and restricted by the relatively high energy consumption, complex operation, environmental pollution, sometimes lower yield and poor product quality. Liquid phase method has been extensively studied because it is easy to operates and can obtain higher quality product [9]. However, the use of excessive organic solvents often leads to product instability, environmental pollution and increases production expense. Therefore, it is necessary to develop a novel greener production method to resolve the problems mentioned above. To achieve these goals, we design a new method for the simple and green production of GIC from flake graphite. The effect of the production parameters (such as ball milling media, ball milling media size, ball milling time, ball mill speed, decomposition temperature, decomposition time and mass ratio of flake graphite to sodium bicarbonate) were investigated systematically. At the same time, the expandability of GIC under different production parameters was also studied. The morphology and structure of the obtained GIC samples were characterized and confirmed by SEM, XRD, and FT-IR, and the reaction mechanism was obtained. The process flow was also designed. This work has academic and industrial reference value for the preparation of GIC. Materials and instruments The flake graphite (0.5 mm) and the sodium bicarbonate (AR) were purchased from Sinopharm Chemical reagent Co. (Shanghai, China). Ball mills (QM-3SP04, YXQM-2 L, KEQ-2 L and QM3SP2) were purchased from Tianchuang Powder Technology Co. (Changsha, China), Miqi Instrument Equipment Co. (Changsha, China), Ru Rui Technology Co. (Guangzhou, China) and Ru Rui Technology Co. (Guangzhou, China), respectively. The analytical balance (TG328A) was purchased from Balance instrument factory (Shanghai, China). The pumping equipment was purchased from Guohua Electric Co. (Shanghai, China). The vacuum drying oven was purchased from Anteing Electronic Instrument Factory (Shanghai, China). The muffle furnace (TDL-1800 A) was purchased from Keda Instrument Co (Nanyang, China). Production procedures The flake graphite powder and sodium bicarbonate (NaHCO3) solid were mixture and loaded into the reaction tank containing steel balls according to the experimental design. The ball mill was started after adjust the suitable rotating speed. The GIC was obtained after the designed ball milling time, and then take out the mixture and put it into the muffle furnace. The temperature of the muffle furnace was adjusted from 150 to 300 °C to be suitable for the decomposition of NaHCO3. After the designed reaction time, the mixture was cooled, washed and dried, then the expandable GIC was obtained. Expanded graphite was obtained by high temperature expansion of expandable GIC at 950 ℃. We also investigated the effects of different preparation parameters on the quality of expandable GIC products. 
Eight process factors (such as ball milling media, ball milling media size, ball milling time, ball mill speed, decomposition temperature, decomposition time, mass ratio of flake graphite to NaHCO3 and ball mill model) were designed and adjusted in the production process. Morphological elucidation Morphological information of samples was obtained by SU8020 Hitachi scanning electron microscopy (Tokyo, Japan). Structural investigation The molecular structure of the GIC product obtained was identified by X-ray diffraction. The samples were scanned and recorded using the X-ray diffractometer (Rigaku, Japan) with an X-ray generator from 15 to 60 of 2θ (Braff angle), using Cu/Ka irradiation at 55 mA and 60 kV. The structure information of product was obtained by FT-IR (IS50). The wave number range scanned was 4000−400 cm-1. After washed and dried, the powders and KBr were compacted into disks and analyzed. Determination of expansion volume Expansion volume refers to the volume (unit mass) of GIC after expansion at a certain temperature, the unit is mL/g. Determine the expansion volume according to the national standard GB10698-89, the specific determine steps are as follows [10]: Firstly, a certain amount of the samples prepared according to the experimental method described in 2.2 was weighed by an analytical balance, and a quartz beaker (with scale) was put into the muffle furnace (adjustable temperature 100−1500 °C) that has been heated to 950 °C to preheat for 5 min, then add the sample into the quartz beaker, do not close the furnace door, and take it out immediately as long as it no longer expands. Read the average value of the highest and the lowest point on the top surface of the sample after expansion as the expanded volume of the sample (V). The expanded volume Z is calculated using the following formula: $${Z}=\frac{{V}}{{m}}$$ V- Volume of sample after expansion (mL), m- Mass of the sample (g). Two parallel tests were performed for each measurement, and the allowable error of the results conformed the requirements of the GB10698-89 standard. Optimization of process parameters Effect of ball milling time Figure 1 shows the XRD patterns of GIC products obtained from different ball milling times (2-10 h), and other experimental conditions were set as follows: ball mill speed was 500 r/min, the decomposition temperature was 150 ℃, the decomposition time was 2 h, the mass ratio of flake graphite to \(\text{NaHC}{\text{O}}_{\text{3}}\) was 1:1. It can be seen from Fig. 1 that the samples prepared under different ball milling times all have the characteristic absorption peaks of GIC. Figure 1 shows when the time was extended, the intensity of the characteristic peak of GIC was decreased firstly and then increased, and reached the minimum at 4 h. According to literature reports, in the XRD analysis of GIC, the weaker intensity of the characteristic splitting peak and the larger peak width indicated the better intercalation effect. Generally speaking, ball milling is more sufficient as the ball milling time increases, and the mixing of graphite and intercalation agent are more uniform under the action of mechanical force, which leads to the decreased of the characteristic absorption peak. However, when the ball milling time is too long, the restacking of graphite is more pronounced, which is not conducive to the preparation of GIC. This resulted the increased of the intensity of the characteristic peak. 
Thus, the XRD characterization results shown that the intercalation effect was the best when ball milled for 4 h. Studies have shown that expandability is one of the important indicators of GIC products in practical applications. Therefore, the thermal expansion performance of GIC products obtained under different ball milling time has also been studied. And in the next single factor experiments, the expansion volume was used as the index to optimized the production parameters. Figure 2 shows the expansion volumes of GIC products obtained under different ball milling times. It can be seen that the expansion volume of GIC was increased firstly and then decreased when the ball milling time was extended, and reached the maximum at 4 h. Thus, the appropriate ball milling time was 4 h, which was adopted by the subsequent experiments run. XRD patterns of GIC products after thermal treatment obtained by different ball milling times Expansion volumes of GIC products after thermal treatment obtained by different ball milling times Effect of decomposition time The effects of decomposition times on the production process were set as follows: ball milling time was 4 h, ball mill speed was 500 r/min, the decomposition temperature was 150 ℃, the mass ratio of flake graphite to NaHCO3 as 1:1 and decomposition times ranging between 1 and 15 h. Figure 3 shows the expansion volumes of GIC products obtained under different decomposition times. It shows when the decomposition time was extended, the expansion volume of GIC was increased firstly and then basically unchanged, and reached the maximum at 4 h. At the beginning, more carbon dioxide was produced by the decomposition of the intercalant NaHCO3 with the increase of the decomposition time, which can effectively increase the distance between the graphite flakes, lead to the expansion volume of GIC increased. However, when the decomposition time was prolonged, the intercalant NaHCO3 decomposed completed, and the decomposition product Na2CO3 cannot continue to decomposed (the decomposition temperature of Na2CO3 is above 850 ℃). This leads to the basically unchanged of the expansion effect. As it can be inferred from the results, 4 h decomposition time was found to be suitable for the investigation. Expansion volumes of GIC products after thermal treatment obtained from different decomposition times Effect of decomposition temperature The decomposition temperature is a key parameter, it directly affects the generated rate of gas obtained by the intercalation agent decomposed, which further affects the expansion effect [11]. Figure 4 shows the effect of the decomposition temperature on the production of GIC at the ball mill speed was 500 r/min, the mass ratio of flake graphite to NaHCO3 was 1:1 and the decomposition temperature was changed from 150−300 ℃. Figure 4 shows the expansion volumes of GIC products obtained under different decomposition temperature was increased firstly and then decreased, and reached the maximum at 200 ℃. This is because higher decomposition temperature resulted in better decomposition effect, which leads to an increase in the expansion volume of GIC. NaHCO3 solid starts to decompose at 50 ℃, and decomposes completely when the temperature reaches about 200 ℃. Therefore, when the decomposition temperature is too high, the decomposition rate is too fast, resulting in the carbon dioxide being lost without increasing the distance between the graphite layers, and the expansion effect is not good. 
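This plateau is consistent with the underlying chemistry (the equations below are standard textbook reactions, stated here for reference rather than taken from the paper):
$$2\,\mathrm{NaHCO_3} \xrightarrow{\ \Delta\ } \mathrm{Na_2CO_3} + \mathrm{H_2O} + \mathrm{CO_2}\uparrow$$
$$\mathrm{Na_2CO_3} \xrightarrow{\ >850\ ^\circ\mathrm{C}\ } \mathrm{Na_2O} + \mathrm{CO_2}\uparrow$$
The first reaction is essentially complete by about 200 ℃, whereas the second requires temperatures far above the 150-300 ℃ range used here, so once the bicarbonate is exhausted no further gas is released and the expansion volume stops increasing.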
From the results, a suitable decomposition temperature is 200 ℃. Expansion volumes of GIC products after thermal treatment obtained from different decomposition temperatures Effect of ball mill speed The effect of the ball mill speed on the preparation of GIC were performed from 300 r/min to 600 r/min, and the mass ratio of flake graphite to NaHCO3 was 1:1. Figure 5 shows the expansion volumes of GIC products obtained under different ball mill speed was increased firstly and then decreased, and reached the maximum at 500 r/min. This is because the more uniform mixing of graphite and intercalant under the influence of mechanical force when a higher ball mill speed is used. Meanwhile, the more restack of the graphene and uneven dispersion when the ball mill speed is too fast, results in the decline of the expansion volume of GIC. Therefore, it is reasonable to expect that a suitable ball mill speed may exist. In order to ensure the better preparation process, 500 r/min was selected as the optimal ball mill speed. Expansion volumes of GIC products after thermal treatment obtained from different ball mill speeds Effect of mass ratio of graphite to intercalant Figure 6 shows the expansion volumes of GIC products obtained under different mass ratio of flake graphite to NaHCO3 was increased firstly and then decreased, and reached the maximum at 1:1. The mass ratio of flake graphite to NaHCO3 was adjusted from 1:0.5 to 1:2. This is because as the amount of NaHCO3 increases, it is beneficial to produce more carbon dioxide during decomposition and increase the distance between graphite layers, which leads to an increase of the expansion volume of GIC. But when the amount of NaHCO3 is too large, under the action of mechanical force, except for a small part mixed with graphite, most of the NaHCO3 is wrapped outside the graphite, and decomposes rapidly during thermal decomposition, resulting in poor intercalation effect, which leads to decrease in the expansion volume of GIC. From the results, a suitable mass ratio of flake graphite to NaHCO3 is 1:1. Expansion volumes of GIC products after thermal treatment obtained from different mass ratio of flake graphite to NaHCO3 Effect of ball milling media The ball milling media have an impact on the production process, because different ball milling media have different squeezing force, impact force, shear force and internal sliding of the ball milling media on the ball milling process. Under the above process parameters, different ball milling media were used for the experiment. The specific experiment were as follows: zirconia ceramic beads, stainless-steel beads and cemented carbide beads were used as ball milling media (the diameter is 8 mm, and the number of ball milling media is 10). Table 1 shows the expansion volumes of GIC products obtained under different ball milling media. It can be seen that the expansion volume of the GIC obtained by zirconia ceramic beads as the ball milling medium was the smallest. This is due to the small specific gravity of the ceramic beads themselves, and the impact force, extrusion force and shear force on the ball milling material were small, and the ball milling efficiency was low, resulted in uneven mixing of graphite and NaHCO3. The ball milling effect of stainless-steel beads and cemented carbide beads were relatively good. 
This is because their own specific gravity was relatively large, and the kinetic energy generated by the drive of the ball mill was large, and the extrusion force, impact force and shear force of the ball mill material were larger. In addition, since the internal sliding of stainless-steel beads is greater than that of cemented carbide beads, which leads to the better grinding effect. Therefore, stainless-steel beads were used as the ball milling media. Table 1 Expansion volumes of GIC products after thermal treatment obtained under different ball milling medias Effect of ball mill media size The size of the ball milling media directly affects the grinding effect through the impact force, extrusion force and grinding effect on the material during the ball milling process. In order to ensure the same quality of the ball mill media loaded into the ball mill, stainless steel beads with diameters of 4 mm (0.26 g/piece), 6 mm (0.89 g/piece), 8 mm (2.1 g/piece) and 10 mm (4.16 g/piece) were used 80, 24, 10 and 5 for the experiment, respectively. Figure 7 shows the expansion volumes of GIC products obtained under different ball milling media size. The number of stainless-steel beads with a small diameter was large, and the striking force of each steel ball was small, but the number of strikes was large, and the grinding area was large. The number of stainless-steel beads with a large diameter was small, and the striking force of each steel ball was large, but the number of strikes was small, and the grinding area was small. Therefore, a good grinding effect can be achieved by choosing a suitable size of the ball milling media. It can be seen from Fig. 7 that when the diameter of the stainless-steel beads was 6 mm, the expansion volume of the GIC was the largest. This is because this size of the ball milling media, not only ensured the sufficient impact force, but also has more hit times and strong grinding effect. Hence, the diameter of 6 mm was selected. Expansion volumes of GIC products after thermal treatment obtained from different ball milling media sizes ball milling media sizes Effect of ball mill model In order to study the influence of different ball mill models on the preparation of GIC, experiments were carried out in four different models of ball mills according to the above optimal conditions. The experimental results were shown in Table 2. The ball mill manufacturers and models shown in the table are 1# (Changsha Tianchuang Powder Technology Co., Ltd. QM-3SP04), 2# (Changsha Miqi Instrument Equipment Co., Ltd. YXQM-2 L), 3# (Guangzhou Rurui Technology Co., Ltd. KEQ-2 L), and 4# (Guangzhou Rurui Technology Co., Ltd. QM3SP2), respectively. It can be seen from Table 2 that the expansion volumes of GIC prepared by using different types of ball mills under the same experimental conditions were basically the same. It can be seen that the production of GIC combined with mechanical force and chemical method described in this paper is stable and does no affected by the ball mill models. Table 2 Expansion volumes of GIC products after thermal treatment obtained under different ball mill models Mechanism discussion Scanning electron microscope (SEM) analysis Scanning electron microscope was used to observe the morphological of the GIC products obtained at optimum production conditions. It can be seen from Fig. 8 that GIC was composed of many bonded and superimposed graphite flakes [12]. 
The densely arranged graphite was split into flakes with a thickness of several hundred nanometers, with obvious signs of bulging and swelling. This is due to changes in the carbon layer structure caused by the intercalation agent entering between the graphite layers. Owing to the intercalation effect, many honeycomb-like fine pores appear between the graphite flakes, and the pores are fusiform. The layered structure still exists, but fractures and voids appear between the lamellae, because the van der Waals forces between the layers were disrupted and the interlamellar distance increased significantly under the effect of the intercalation.
SEM images of GIC sample produced at the optimum conditions
X-ray diffraction (XRD) analysis
In order to study the change in crystal structure before and after intercalation, the XRD patterns of the samples were measured. Figure 9 shows the XRD patterns of the graphite raw material, the GIC and the expanded graphite, respectively. The expandable GIC product was obtained at the optimum production conditions, and the expanded graphite was obtained by high-temperature expansion of the GIC at 950 ℃. It can be seen from Fig. 9 that natural flake graphite has two sharp characteristic peaks at 2θ = 26.60° and 2θ = 54.76° [13], with high diffraction intensity, which is due to the regular arrangement of internal particles and high crystallinity. The intensity of the diffraction peaks of the GIC was greatly weakened, and the peak widths were broadened. The d002 diffraction peak (2θ = 26.60°) was split into two diffraction peaks at 2θ = 24.68° and 2θ = 28.32°, and the d004 diffraction peak (2θ = 54.76°) was split into two diffraction peaks at 2θ = 51.98° and 2θ = 56.02°. This is because, after the flake graphite was intercalated, the distance between the graphite flakes increased and the crystal structure was disturbed, which resulted in the splitting and leftward shift of the diffraction angles and the weakening of the diffraction intensity. The interlayer spacing can be calculated from the Bragg equation 2d sinθ = nλ [14, 15]. With the Cu Kα wavelength λ = 0.154 nm used in the measurements, substituting 2θ = 24.68° and 2θ = 51.98° from Fig. 9 into the Bragg equation gives interlayer spacings of about 0.366 and 0.175 nm for the GIC, larger than the corresponding flake graphite spacings of 0.335 and 0.167 nm. This reflects the expansion of the graphite structure along the C axis and indicates that the NaHCO3 intercalation agent has entered the interlayer of the graphite and increased the interlayer distance of the layered graphite. The above results indicate that the intercalation agent has entered between the graphite layers and that GIC was prepared. The interlayer structure of the expanded graphite obtained after expansion of the GIC was partially destroyed by the action of the intercalator. The remaining undamaged graphite crystallites still retain the original graphite structure, so the characteristic diffraction peaks of expanded graphite are essentially the same as those of flake graphite. The diffraction peak intensity of the expanded graphite is significantly weakened while the peak shape remains sharp compared with flake graphite, which indicates that the crystallites in the expanded graphite were further reduced, although graphite crystallites are still present.
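As a quick numerical check of the spacings quoted above (a standalone sketch, not code from the paper; the exact figures depend slightly on the wavelength value and peak positions used):

import math

def bragg_d_spacing(two_theta_deg, wavelength_nm=0.15406, order=1):
    """Interlayer spacing d (nm) from Bragg's law, n*lambda = 2*d*sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength_nm / (2.0 * math.sin(theta))

# Cu K-alpha radiation; peak positions read from the XRD patterns.
for two_theta in (26.60, 54.76, 24.68, 51.98):
    print(f"2theta = {two_theta:5.2f} deg -> d = {bragg_d_spacing(two_theta):.3f} nm")
# Flake graphite (26.60 and 54.76 deg) gives about 0.335 and 0.167 nm,
# while the GIC peaks (24.68 and 51.98 deg) give about 0.360 and 0.176 nm,
# i.e. an enlarged interlayer spacing after intercalation.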
XRD patterns of different samples Fourier transform infrared spectra (FT-IR) analysis As a relatively easy method, FT-IR spectroscopy has been widely used in GIC research, from which the direct structural information and changes can be obtained during various chemical treatments. Figure 10 shows the FT-IR patterns of graphite raw materials, expandable GIC obtained at optimum production conditions and expanded graphite, respectively. It can be seen that the three samples all have characteristic absorption peaks at 1582 cm-1 and 3428 cm-1. The absorption peak of 1582 cm-1 was belonged to the sp2 structure of graphite crystal C = C stretching vibration peak [16], indicated that the internal structure of the GIC and the expanded graphite layer has not changed, and the preparation method does not change the carbon-based structure. The absorption peak at 3428 cm-1 attributed to OH stretching vibration peaks, which is the trace moisture contained in the sample itself or KBr when pressed. In the infrared spectrum of GIC, there are strong characteristic peaks in 880 cm-1 and 1360 cm-1. The peak at 880 cm-1 was caused by the carbonate internal stretching vibration mode, and 1360 cm-1 was the absorption peak of carbonate internal stretching vibration mode [17]. The above results indicated that the presence of carbonate in GIC. It can be seen from the infrared spectrum of expanded graphite that the characteristic peak of carbonate was significantly weakened, indicated that the acid radical ions have decomposed to gas and escaped, but there was still a small amount of residue. These results further indicated that the preparation method does not change the carbon-based structure, and the NaHCO3 intercalation agent has entered the interlayer of the graphite, which increased the interlayer distance of the layered graphite. FT-IR patterns of different samples X-ray photoelectron spectrometer (XPS) To further analyse the elements of the GIC product, we used the XPS test. The experimental results are shown in the Fig. 11. From Fig. 11, The Bindding Energy at 282.55 ev and 530.33 ev were GIC's characteristic peaks which attributed to the C1s and O1s. C1s is mainly due to the carbon structure of GIC, and O1s is mainly due to the intercalator. Further quantitative calculations found that the carbon element content of the GIC product was 88.98%, and the oxygen element content was 11.02%. The experimental results show that our preparation method produces good GIC products with no impurities. This result of XPS is in good agreement with those of FT-IR and XRD. XPS patterns of GIC sample produced at the optimum conditions Production mechanism The schematic representation of the production mechanism can be seen as Fig. 12. The flake graphite is intercalated with NaHCO3 as an intercalant under the action of mechanical ball milling, and then NaHCO3 was decomposed at a suitable temperature under the protection of inert gas. The gas generated during the decomposed of NaHCO3 increases the interlayer spacing of the layered graphite, and the GIC product was obtained after washed and dried. Further research shown that the GIC prepared by this method has good thermal expansion properties. The method has the advantages of simple process, mild preparation conditions, no use of aggressive reagents, process stability, etc. Thus, it could be an alternative green and efficient method for GIC production in industry. 
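To gather the route in one place, the optimised parameters reported above can be written as a simple configuration, together with the expansion-volume calculation used to score each run (an illustrative sketch only; the variable names are ours, not from the paper):

# Optimised conditions reported in this work
OPTIMAL_CONDITIONS = {
    "intercalant": "NaHCO3",
    "graphite_to_intercalant_mass_ratio": "1:1",
    "milling_media": "stainless-steel beads, 6 mm diameter",
    "mill_speed_rpm": 500,
    "milling_time_h": 4,
    "decomposition_temp_C": 200,
    "decomposition_time_h": 4,
}

def expansion_volume(v_expanded_ml, sample_mass_g):
    """Expansion volume Z = V / m in mL/g, as defined by GB10698-89."""
    return v_expanded_ml / sample_mass_g

# Example: a 1.0 g sample that expands to 142 mL gives 142 mL/g,
# the value reported for GIC prepared under the conditions above.
print(expansion_volume(142.0, 1.0))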
Schematic representation of the production mechanism
Process flow design of the preparation method
Based on the experimental results in this paper, we designed a process flow for the green production of GIC by the combined mechanical force and chemical method. The process flow chart is shown in Fig. 13. After the graphite and the intercalant are mixed in a given ratio, the mixture is ball milled mechanically at a set speed. When the set milling time is reached, the mixture is taken out and then thermally decomposed at a suitable temperature. Once the decomposition time has elapsed, expandable GIC is obtained after cooling, washing and drying.
Process flow diagram of preparation method
This paper investigated a new method, combining mechanical force with a chemical method, to produce GIC from graphite. The effects of the production conditions on the thermal expansion performance of the GIC products were investigated by single-factor experiments. The optimal conditions were stainless-steel milling beads 6 mm in diameter, a ball mill speed of 500 r/min for 4 h, a decomposition temperature of 200 ℃ for 4 h, and a flake graphite to NaHCO3 mass ratio of 1:1. Under the optimized conditions, the expansion volume of the GIC product was 142 mL/g. The mechanism of the preparation method was also studied, and the preparation process was designed. The method has the advantages of a simple process, products free of impurities, no use of aggressive reagents, and a stable process. Overall, the new method could be a green and promising method for industrial GIC production.
All data generated or analysed during this study are included in this published article.
Novoselov KS, Geim AK. Electric field effect in atomically thin carbon films. Science. 2004;306:666–9. Stoller MD, Park SJ, Zhu Y, An J, Ruoff RS. Graphene-based ultra-capacitors. Nano Lett. 2008;8:3498–502. Zhu JP. Study on physical properties of graphite sulfate intercalation compound. J Hefei Univ Technol Nat Sci. 2001;24(6):1158–62. Yin W. Study on preparation and properties of graphite intercalation composite. Jilin: Jilin University; 2003. Chen YP, Li SY, et al. Optimization of initial redox potential in the preparation of expandable graphite by chemical oxidation. New Carbon Mater. 2013;28(6):435–41. Yang YQ, Wang JD, et al. Preparation and study of expandable graphite electrochemical method. Fiber Composites. 1998;2:22–4. Shornikova O, Sorokina N, et al. The effect of graphite nature on the properties of exfoliated graphite doped with nickel oxide. J Phys Chem Solids. 2008;69(6):1168–70. Guo XQ, Huang J, et al. Preparation of graphene nanosheet functional material by ultrasonic stripping of secondary expanded graphite. Funct Mater. 2013;12(44):1800–3. Gao Y, Gu JL, et al. Bromine-graphite intercalation compound. Carbon Technol. 2000;4(109):21–5. Qiu T, Chen ZG. Study on the advanced treatment of oilfield wastewater by expanded graphite. J Jiangsu Polytech Univ. 2006;18(4):11–13. Zhang Y, Xu BZ. Study on the process parameters of CrO3-graphite intercalation compound prepared by vacuum heat treatment. In: Proceedings of the tenth national heat treatment conference. 2011;9:842–845. Liu QQ, Zhang Y, et al. Study on the preparation of expanded graphite and its oil absorption performance. Non Metallic Mines. 2004;27(6):39–41. Wang J, Han ZD. The combustion behavior of polyacrylate ester/graphite oxide composites. Polym Adv Technol. 2006;17(4):335–340. Chan WM, Wang JJ, et al.
Preparation and characterization of graphene nanoplatelets by ultrasonic stripping. Dev Appl Mater. 2017;5(10):77–85. Shi DK. Material science foundation. 2nd ed. Beijing: Machinery Industry Press; 2003. p. 78–80. Ferrari AC, Robertson J. Raman spectroscopy in carbons: from nanotubes to diamond. Beijing: Chemical Industry Press; 2007. p. 193. Ferrari AC, Robertson J. Interpretation of Raman spectra of disordered and amorphous carbon. Phys Rev B. 2000;61(20):14095–107.
XRD, FT-IR and XPS data were obtained using equipment maintained by the Jilin Institute of Chemical Technology Center of Characterization and Analysis. The authors acknowledge the assistance of the JLICT Center of Characterization and Analysis. We would also like to thank Professor Zhang Long and the Complex Utilization of Petro-resources and Biomass Laboratory for providing experimental instruments and equipment for some of the early experiments of this work.
School of Petrochemical Technology, Jilin Institute of Chemical Technology, Jilin, 132022, China: Xin Wang. School of Information and Control Engineering, Jilin Institute of Chemical Technology, Jilin, 132022, China: Guogang Wang. Jilin Provincial Engineering Laboratory for the Complex Utilization of Petro-resources and Biomass, School of Chemical Engineering, Changchun University of Technology, Changchun, 130012, Jilin, People's Republic of China: Long Zhang.
XW conceived and designed the experiments. XW and GW conducted the experiments and interpreted the results. XW participated in analyzing the data. XW wrote the paper and was a major contributor in writing the manuscript. Long Zhang provided experimental instruments and equipment for some of the early experiments of this work. All authors read and approved the final manuscript. Correspondence to Xin Wang. All the authors approved the submission of the manuscript.
Wang, X., Wang, G. & Zhang, L. Green and simple production of graphite intercalation compound used sodium bicarbonate as intercalation agent. BMC Chemistry 16, 13 (2022). https://doi.org/10.1186/s13065-022-00808-y
Keywords: Graphite intercalation compound; Green production; Mechanical force and chemical method.
Time-varying SMART design and data analysis methods for evaluating adaptive intervention effects Tianjiao Dai1 & Sanjay Shete1,2 In a standard two-stage SMART design, the intermediate response to the first-stage intervention is measured at a fixed time point for all participants. Subsequently, responders and non-responders are re-randomized and the final outcome of interest is measured at the end of the study. To reduce the side effects and costs associated with first-stage interventions in a SMART design, we proposed a novel time-varying SMART design in which individuals are re-randomized to the second-stage interventions as soon as a pre-fixed intermediate response is observed. With this strategy, the duration of the first-stage intervention will vary. We developed a time-varying mixed effects model and a joint model that allows for modeling the outcomes of interest (intermediate and final) and the random durations of the first-stage interventions simultaneously. The joint model borrows strength from the survival sub-model in which the duration of the first-stage intervention (i.e., time to response to the first-stage intervention) is modeled. We performed a simulation study to evaluate the statistical properties of these models. Our simulation results showed that the two modeling approaches were both able to provide good estimations of the means of the final outcomes of all the embedded interventions in a SMART. However, the joint modeling approach was more accurate for estimating the coefficients of first-stage interventions and time of the intervention. We conclude that the joint modeling approach provides more accurate parameter estimates and a higher estimated coverage probability than the single time-varying mixed effects model, and we recommend the joint model for analyzing data generated from time-varying SMART designs. In addition, we showed that the proposed time-varying SMART design is cost-efficient and equally effective in selecting the optimal embedded adaptive intervention as the standard SMART design. Sequential, multiple assignment, randomized trial (SMART) designs and their analysis are being used to construct high-quality adaptive interventions that can be individualized by repeatedly adjusting the intervention(s) over time on the basis of individual progress [1–5]. The SMART design was pioneered by Murphy, building on the work of Lavori and Dawson [6, 7]. SMART designs involve an initial randomization of individuals to different intervention options, followed by re-randomization of some or all of the individuals to another set of available interventions at the second stage. At subsequent stages, the probability and type of intervention to which individuals are re-randomized may depend on the information collected from the previous stage(s) (e.g., how well the patient responded to the previous treatment; adherence to treatment protocol). Thus, there can be several adaptive interventions embedded within each SMART design. This allows for testing the tailoring variables and the efficacy of the interventions in the same trial. There are several practical examples of SMART studies that have been conducted (e.g., the CATIE trial [8] for antipsychotic medications in patients with schizophrenia, STAR*D for the treatment of depression, [9, 10] and phase II trials at MD Anderson for treating cancer [11]). 
The goal of these studies is to optimize the long-term outcomes by incorporating the participant's characteristics and intermediate outcomes evaluated during the intervention [12, 13]. An example of a two-stage SMART design is a study that characterized cognition in nonverbal children with autism [14]. To improve verbal capacity, participants were initially randomized to receive either a combination of behavioral interventions (Joint Attention Symbolic Play Engagement and Regulation (JASPER) + Enhanced Milieu Training (EMT)) or an augmented intervention (JASPER + EMT+ speech-generating device [SGD]). Children were assessed for early response versus slow response to the first-stage treatment at the end of 12 weeks. The second-stage interventions, administered for an additional 12 weeks, were chosen on the basis of the response status (only slow responders to JASPER + EMT were re-randomized to intensified JASPER + EMT or received the augmented JASPER + EMT + SGD; slow responders to JASPER + EMT + SGD received intensified treatment; all early responders continued on the same intervention). There were three pre-fixed assessment time points: at 12 weeks, 24 weeks and 36 weeks (follow-up), which were the same for all participants in the study. Compared to multiple, one-stage-at-a-time, randomized trials, SMART designs provide better ability to compare the impact of a sequence of treatments, rather than examining each piece individually. For example, a SMART allows us to detect possible delayed effects in which an intervention at a previous stage has an effect that is less likely to occur unless it is followed by a particular subsequent intervention option. The typical modeling approach for the SMART design as described by Nahum-Shani et al. includes the indicators of intervention at each stage as covariates and thus accounts for the delayed effects on the final response. In order to develop a sequence of best decision rules for each individual, various statistical learning methods of estimating the optimal dynamic treatment regimens have been proposed, among which Q-learning has been developed for assessing the relative quality of the intervention options and estimating the optimal (i.e., most effective) sequence of decision rules with linear regression. For a two-stage SMART, the Q-learning approach controls for the optimal second-stage intervention option when assessing the effect of the first-stage intervention, and reduces the potential bias resulting from unmeasured causes of both the tailored variables and the primary outcome. A similar approach for deriving the optimal decision rules for SMART is A-learning, which is more robust to model misspecification than Q-learning for consistent estimation of the optimal treatment regime [15]. Zhao et al. introduced the two learning methods of BOWL and SOWL, [16] which are based on directly maximizing over all dynamic treatment regimens (DTRs) a nonparametric estimator of the expected long-term outcome. As an alternative to the above learning approaches, Zhang et al. [17] proposed a robust estimation of the optimal dynamic treatment regimens for sequential treatment decisions, which maximizes a doubly robust augmented inverse probability weighted estimator for the population mean outcome over a restricted class of regimes. 
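To make the Q-learning recursion described above concrete, a minimal two-stage version with linear working models can be sketched in R as follows (the data frame dat and its variable names are hypothetical, and the sketch assumes that a larger final outcome is better; use pmin instead of pmax when a smaller outcome, such as cigarettes smoked per day, is preferred):

```r
# Stage 2: regress the final outcome on the history and the second-stage option
q2 <- lm(Y2 ~ Y1 + A1 * A2, data = dat)

# Pseudo-outcome: predicted final outcome under the best second-stage option
pred_a2  <- function(a2) predict(q2, newdata = transform(dat, A2 = a2))
dat$Vopt <- pmax(pred_a2(1), pred_a2(-1))

# Stage 1: regress the pseudo-outcome on baseline information and the first-stage option
q1 <- lm(Vopt ~ Y0 + A1, data = dat)
# The estimated optimal rule at each stage selects the option with the better fitted Q-value.
```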
All these approaches model the outcomes of interest as dependent variables, and for the predictor variables, they model the main and interacting effects of the intervention options at each stage and the baseline individual characteristics. The amount of time an intervention is administered, however, is not explicitly modeled, although it can be used as a covariate in these regressions. There are examples of SMART designs in which a participant is assessed at several pre-fixed time points during the first-stage treatment and once he/she meets an assigned criterion for response status, he/she is re-randomized to the second stage of treatment. Such a SMART design has been applied to develop a dynamic treatment regime for individuals with alcohol dependence using the medication naltrexone [2, 18, 19]. At the beginning of the study, patients were randomized to either a stringent or a lenient criterion for early non-response. Initially, all patients received naltrexone. Starting at the end of the second week, patients who showed early response were assessed weekly for eight weeks, and those who met the assigned criterion for non-response were assigned to the second stage randomization in that week; whereas the responders were re-randomized at week eight. Another example of using a SMART design to evaluate multiple, fixed time points is the study of pharmacological and behavioral treatments for children with ADHD, where children were assessed monthly for response or non-response [19–23]. In addition, Lu et al. [19] developed repeated-measures piecewise marginal models for comparing embedded treatments in such SMART designs with multiple evaluations at fixed time points. In these studies, subjects were assessed at fixed time points; thus, the time of treatment takes values along a finite set of time points. Although, SMART designs with outcome assessments at fixed time points exist, there are advantages to administering a drug as soon as an individual achieves an intermediate response. For example, the smoking cessation drugs varenicline and bupropion can increase the risk of psychological side effects such as unusual changes in behavior, hostility, agitation, depressed mood and suicidal thoughts [24–26]. In addition, varenicline costs approximately $300 per month. Therefore, allowing the duration of treatment to vary among participants for one or more stages of the study may reduce the side effects and costs associated with the interventions. For such time-varying SMART designs, the duration of treatment plays an important role in decision making, and including it in the analysis may increase the power of the study and better serve our goal of analysis. To further extend the assignment strategies discussed in the above examples and utilize the information contained in the treatment duration, in this paper, we proposed a novel time-varying SMART design, which enables us to more efficiently assign different intervention options as soon as an individual achieves a set of intermediate response goals. Therefore, the time of treatment is a continuous random variable for each individual that can take any value on a subset of the positive real line, and is treated as an endogenous variable. The existing statistical methods are inappropriate for analyzing data obtained from such a time-varying SMART design. 
Therefore, to fully utilize the potential of this type of time-varying SMART design in making more efficient decisions, we also proposed two analytic approaches that can be used to analyze data from such a time-varying SMART design. The first approach is a linear mixed model with time-varying fixed effects [27, 28], which is in fact a piecewise linear model. The second approach incorporates a joint modeling method in which a survival model is fitted jointly with the linear mixed model [29]. We performed simulations to evaluate the statistical properties of both methods. Our simulation results showed that both methods estimated the expected final outcome for each embedded adaptive intervention in such design accurately, but the joint-modeling method provided better estimates for certain parameters in the model. To compare the power and cost efficiency of the time-varying SMART design to those of an analogous standard SMART design, we simulated two trials with identical sample sizes and intervention effects using (a) the time-varying SMART design and (b) the standard SMART design. These simulations showed that the time-varying SMART design is cost-efficient and has power similar to that of the standard SMART design in selecting the optimal embedded adaptive intervention. Proposed time-varying SMART design Figures 1 and 2 illustrate the proposed time-varying designs. Both two-stage time-varying SMARTs were designed to provide data regarding how the intensity and combination of two types of interventions might be adapted to a subject's progress in a cost- and time-efficient manner. Example of time-varying two-stage SMART design with equal probability allocation: each participant is randomized twice Example of time-varying two-stage SMART design with unequal probability allocation: only non-responders are re-randomized in the second stage In the first example (see Fig. 1), suppose medication (M) and behavioral intervention (B) are two initial intervention options for individuals who are heavy smokers (e.g., those who smoke more than or equal to 25 cigarettes per day). The number of cigarettes a subject smokes per day is the outcome of interest and is measured at the beginning of the study, at several intermediate time points and at the end of the study. Let Y 0 denote the number of cigarettes a subject smoked per day at the beginning of the study (t = 0). Subjects are randomly assigned to the medication or the behavioral interventions at the beginning of the study. Monitoring the outcome of interest begins at a pre-fixed time point (e.g. one week after the initial randomization and is denoted as t 00) after the initial intervention is implemented, and t 10 denotes the time point at which those who did not respond to a first-stage intervention are re-randomized. A subject is considered a responder to the first-stage intervention if there is a significant decrease in the number of cigarettes he/she smoked per day (e.g., the decrease in the number of cigarettes smoked per day is above a pre-fixed threshold, C) at an intermediate time point T 1, before t 10. Thus, T 1 is a random variable of time and varies among responders. A subject is classified as a non-responder if the decrease in the number of cigarettes he or she smoked per day by t 10 is below C. Therefore, all the non-responders are given the first-stage intervention for a fixed time period of t 10 (e.g., the first month of initial interventions), which can be seen as the right-censored time point. 
Let Y 1 denote the number of cigarettes smoked per day at the end of the first-stage intervention. An indicator variable δ is defined as δ = I(T 1 < t 10), where I(⋅) is the indicator function that takes the value 1 if T 1 < t 10 (i.e., if the subject is a responder) and the value 0 if T1 ≥ t10 (i.e., if the subject is a non-responder). A responder is re-randomized either to continue with the first-stage intervention (M or B) or to receive the first-stage intervention at a reduced intensity (M- or B-); whereas a non-responder is re-randomized to receive the first-stage option at an increased intensity (M+ or B+) or augmented with the other type of intervention (i.e., adding a behavioral intervention for those who started with medication or adding medication for those who started with a behavioral intervention). We let all the subjects in this design stay on their second-stage interventions for a fixed time period, Δt (e.g., one month). Therefore, for a subject whose first-stage intervention time is T 1, the total study time is T 1 + Δt, which we denote as T 2. For each participant, Y 2 is the final measurement of the number of cigarettes smoked per day at T 2; see Fig. 1). The design illustrated in Fig. 2 is similar to that in Fig. 1 except that all the responders continue with their first-stage intervention options (i.e., each responder receives the same intervention after the response time point T 1) in the second stage (see Fig. 2). The adaptive interventions that are embedded within the two SMART designs in Figs. 1 and 2 are listed in Additional file 1: Tables S1 and S2. Analytic approach Let A 1 and A 2 be the indicators of the first- and second-stage intervention options, respectively. For each individual, we observe the data (Y 0, A 1, T 1, Y 1, A 2, Y 2, T 2, δ). The outcomes of interest are the longitudinal measurements Y 0, Y 1, and Y 2, which are fitted with a linear mixed model, assuming they share the same random intercepts at the subject level. Because the intervention options and their durations change over time in this design, we first proposed a straightforward time-varying mixed effects model (TVMEM) to analyze the outcomes. In this approach, the duration of time a treatment is administered is used as a covariate in the model. Such an approach is better than the approaches that ignore the time component of the intervention (i.e., the duration of the intervention influences its effect). However, the time duration is a random variable and one may gain statistical efficiency by treating it as a dependent variable in the modeling. Therefore, we also proposed a joint-modeling approach that simultaneously postulates a linear mixed effects model for the longitudinal measurements Y = (Y 0, Y 1, Y 2) and a Cox model for the survival time T 1. In particular, we fit a survival submodel for T 1 jointly with the previously mentioned TVMEM that will efficiently extract the information contained in T 1. Analytic models Time-varying mixed effects model of Y = (Y 0, Y 1, Y 2) A linear TVMEM is fitted to the longitudinal outcomes, with interventions and their interactions and durations included as predictors. 
For each individual i in the study, we have

$$ \begin{aligned} Y_i(t) = m_i(t) + \varepsilon_i(t) &= Z_i\eta(t) + X_i(t)\beta(t) + b_i + \varepsilon_i(t)\\ &= Z_i\eta(t) + \beta_0(t) + \beta_1(t)A_{1i}(t) + \beta_2(t)A_{2i}(t) + \beta_3(t)\,t + \beta_4(t)A_{1i}(t)\cdot A_{2i}(t) + b_i + \varepsilon_i(t) \end{aligned} \qquad (1) $$

where m i (t) is the unobserved true value of the longitudinal outcome at time point t, and b i is the subject-level random effect and is assumed to be normally distributed with a mean of zero and variance \( \sigma_b^2 \); Z i is a vector of the baseline covariates (e.g., age, sex, comorbidities, etc.) with a corresponding vector of the regression coefficients η(t); X i (t) is the vector of the first-stage and second-stage intervention options, their interactions, and duration of intervention with a corresponding vector of the regression coefficients β(t). Finally, ε i (t) is the error term at time t and is assumed to be normally distributed and independent of b i . In our study design, we consider three time points at which the outcomes of interest are measured: t = 0, T 1i and T 2i , where T 1i and T 2i are the respective time points at which individual i completes the first- and second-stage interventions. Therefore, A 1i (t) takes the value of A 1i at times T 1i and T 2i and is equal to 0 at t = 0, and A 2i (t) takes the value of A 2i at T 2i and is equal to 0 at time points 0 and T 1i . In this way, η(t) and β(t) are piecewise linear fixed coefficients; therefore, model (1) at the three time points is equivalent to the following three linear mixed-effects submodels:

$$ \begin{aligned} Y_{0i} = Y_i(0) &= m_i(0) + \varepsilon_i(0)\\ &= Z_i\eta(0) + X_i^T(0)\beta(0) + b_i + \varepsilon_i(0)\\ &= Z_i\eta_0 + \beta_{00} + b_i + \varepsilon_{0i} \end{aligned} \qquad (2) $$

$$ \begin{aligned} Y_{1i} = Y_i(T_{1i}) &= m_i(T_{1i}) + \varepsilon_i(T_{1i})\\ &= Z_i\eta(T_{1i}) + X_i^T(T_{1i})\beta(T_{1i}) + b_i + \varepsilon_i(T_{1i})\\ &= Z_i\eta_1 + \beta_{01} + \beta_{11}A_{1i} + \beta_{31}T_{1i} + b_i + \varepsilon_{1i} \end{aligned} \qquad (3) $$

$$ \begin{aligned} Y_{2i} = Y_i(T_{2i}) &= m_i(T_{2i}) + \varepsilon_i(T_{2i})\\ &= Z_i\eta(T_{2i}) + X_i^T(T_{2i})\beta(T_{2i}) + b_i + \varepsilon_i(T_{2i})\\ &= Z_i\eta_2 + \beta_{02} + \beta_{12}A_{1i} + \beta_2 A_{2i} + \beta_{32}T_{2i} + \beta_4 A_{1i}\cdot A_{2i} + b_i + \varepsilon_{2i}\\ &= Z_i\eta_2 + \beta_{02} + \beta_{12}A_{1i} + \beta_{22}A_{2Ri} + \beta_{23}A_{2NRi} + \beta_{32}T_{2i} + \beta_{41}A_{1i}\cdot A_{2Ri} + \beta_{42}A_{1i}\cdot A_{2NRi} + b_i + \varepsilon_{2i} \end{aligned} \qquad (4) $$

where in equations (2) through (4), Y 0i , Y 1i and Y 2i are the outcome values at times 0, T 1i and T 2i , respectively; A 1i is the indicator of the first-stage intervention options (−1 for M and +1 for B), and A 2i = (A 2Ri , A 2NRi ) is the indicator vector for the second-stage intervention options, where A 2Ri is the indicator for the second-stage intervention options for the responders to the first-stage intervention (1 = continue the initial intervention; −1 = reduce the intensity of the initial intervention) and A 2NRi is the indicator for the second-stage intervention options for the non-responders (1 = increase the initial intervention; −1 = augment the initial intervention with the other type of intervention), with A 2Ri = 0 for non-responders and A 2NRi = 0 for responders.
A 1i ⋅ A 2Ri and A 1i ⋅ A 2NRi are the interaction effects of the first-stage intervention and second-stage intervention among responders and non-responders, respectively, in the submodel of Y 2i (i.e., submodel (4)). Parameters η 0, η 1, η 2 and β 00, β 01, β 02 are the coefficients of the baseline covariates and the intercepts at time points 0, T 1i and T 2i , respectively. Submodel (2) includes only baseline covariates as predictors for the outcome at the beginning of the study (i.e., Y 0i at t = 0); submodel (3) models the outcome of interest at the intermediate time point of the study (i.e., Y 1i at t = T 1i ) and includes covariates A 1i and T 1i , for which the corresponding coefficients β 11 and β 31 account for the direct effect of A 1i and the indirect effects through T 1i on Y 1i ; submodel (4) includes all the main and interacting effects of the intervention options at each stage and the duration T 2i (T 2i = T 1i + Δt) as predictors, for which the coefficients β 12 and β 32 account for the delayed effect of A 1i and the delayed indirect effects of A 1i through T 2i . The coefficients β 2 = (β 22, β 23) and β 4 = (β 41, β 42) account for the effects of the second-stage interventions and of their interactions with the first-stage interventions on the final outcome Y 2i (measured at the end of the study, T 2i ). The conditional expectations for models (1)-(4) are provided in Additional file 2. We also provide the conditional expectations of the final outcomes for each of the eight embedded adaptive interventions in the SMART design of Fig. 1 and the four embedded adaptive interventions in the SMART design of Fig. 2 [see Additional file 2].
Joint model
In addition to the TVMEM, we postulate a relative risk model for T 1i (the time to the event of interest) as

$$ h_i(t) = h_0(t)\exp\{\gamma_1 A_{1i} + \gamma_2 W_i + \alpha\, m_i(0)\} \qquad (5) $$

where W i is a vector of baseline covariates, which could differ from the vector Z i in model (1), and h 0(⋅) is the baseline risk function. The underlying longitudinal measurement m i (0) at baseline (i.e., at time point t = 0), as approximated by the TVMEM, and the first-stage intervention A 1i are included as predictors in model (5) because the time point at which an individual responds to the first-stage intervention (i.e., T 1i ) depends only on the type of first-stage intervention the subject received and the baseline characteristics. We jointly estimate the coefficients in models (1) and (5) by using the maximum likelihood estimation method. To define the joint distribution of the time-to-event and longitudinal outcomes, we assume that the random effect b i underlies both the longitudinal and survival processes for each subject. This means that the random effect accounts for both the association between the longitudinal and event outcomes and the correlation between the repeated measurements in the longitudinal process. We also assume that the longitudinal outcomes {Y 0i , Y 1i , Y 2i } are independent of the time T 1i conditional on the random effect b i . An illustrative sketch of how both models can be fitted in R is given below.
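The sketch uses the nlme, survival and JM packages with hypothetical data-frame and variable names; note that the default JM association links the hazard to the current value of the longitudinal trajectory rather than to m i (0) as in model (5), so this is an approximation rather than the authors' exact implementation (which is provided in Additional file 3):

```r
library(nlme); library(survival); library(JM)

# Long format: one row per subject-visit (t = 0, T1, T2), with A1, A2R, A2NR
# coded as in submodels (2)-(4) and set to 0 before the corresponding stage.
fitLME <- lme(Y ~ A1 + A2R + A2NR + time + A1:A2R + A1:A2NR,
              random = ~ 1 | id, data = dat_long)

# One row per subject: time to response T1, censored at t10 for non-responders.
fitCox <- coxph(Surv(T1, delta) ~ A1, data = dat_id, x = TRUE)

# Joint model: shares the subject-level random intercept between the two submodels.
fitJoint <- jointModel(fitLME, fitCox, timeVar = "time")
summary(fitJoint)
```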
Under the conditional independence assumption stated above, the joint likelihood contribution for the ith subject can be formulated as

$$ p(T_{1i}, \delta_i, Y_i; \theta) = \int p(T_{1i}, \delta_i \mid b_i; \beta, \gamma, \alpha, \eta)\left[\prod_j p\{Y_i(t_{ij}) \mid b_i; \beta, \eta\}\right] p(b_i; \sigma_b)\, db_i , $$

where p{Y i (t ij ) | b i ; β, η} is the univariate normal density for the longitudinal response at time point t ij , an element of the vector t i = {t si : s = 0, 1, 2} = {0, T 1i , T 2i }; p(b i ; σ b ) is the normal density with standard deviation σ b for the random effect b i ; and p(T 1i , δ i | b i ; β, γ, α, η) is the likelihood for the time to the intermediate outcome, which can be written as

$$ p(T_{1i}, \delta_i \mid b_i; \beta, \gamma, \alpha, \eta) = \{h_i(T_{1i} \mid m_i(0); \beta, \gamma, \alpha, \eta)\}^{\delta_i} \cdot S_i(T_{1i} \mid m_i(0), A_{1i}; \beta, \gamma, \alpha, \eta) = \{h_i(T_{1i} \mid m_i(0); \beta, \gamma, \alpha, \eta)\}^{\delta_i} \cdot \exp\left\{-\int_0^{T_{1i}} h_i(s \mid m_i(0); \beta, \gamma, \alpha, \eta)\, ds\right\} , $$

where δ i = I(T 1i < t 10). Parameters in the model are estimated by maximizing the corresponding log-likelihood function with respect to (β, γ, α, η). We obtained the maximum likelihood estimates using the R package "JM" [30]. The parameters (β 12, β 22, β 23, β 32, β 41, β 42) in submodel (4) (i.e., the model of the final outcome Y 2) are of primary interest and were estimated using the two approaches described above. The data organization and implementation of these methods are presented in Additional file 3. For the example illustrated in Fig. 1, we considered two simulation scenarios in which Y 0 and Y 1 were simulated using submodels (2) and (3), respectively, and Y 2 was simulated with and without the interaction terms (A 1i ⋅ A 2Ri and A 1i ⋅ A 2NRi ) in submodel (4). In both scenarios, we simulated 500 replicates of n = 1000 individuals and randomly assigned subjects (with probability 0.5) to one of the two first-stage interventions (i.e., A 1 equal to 1 [behavioral intervention] or −1 [medication]). Responders and non-responders to the initial interventions were then re-randomized (with probability 0.5) to one of the corresponding second-stage intervention options (i.e., A 2R and A 2NR were randomly assigned to be 1 or −1, with A 2R = 0 for non-responders and A 2NR = 0 for responders; see Fig. 1). In both scenarios, the random effects {b i } (i = 1, 2, …, n) were generated from a normal distribution with a mean of 0 and a standard deviation of 5, and the baseline outcomes {Y 0i } were simulated using submodel (2) with β 00 = 10 and ε 0i ~ N(0, 4²). The intermediate outcomes {Y 1i } were simulated using submodel (3) with β 01 = 1, β 11 = 0.2, and β 31 = 0.1 in the first scenario, whereas in the second scenario they were simulated with β 01 = 1, β 11 = 0.6, and β 31 = 0.1; in both scenarios the error standard deviation was 5 (i.e., ε 1i ~ N(0, 5²)), and the simulated values satisfied the conditions Y 0i − Y 1i ≥ 9 (C = 9) if subject i was a responder and Y 0i − Y 1i < 9 if subject i (i = 1, 2, …, n) was a non-responder.
The time points T 1i were generated from a left-truncated Weibull distribution (truncated from t 00 = 0.1, the start time for monitoring), with shape = 1 and scale = exp{γ 0 + γ 1 A 1i + αm i (0)}, where γ 0 = −1.5, γ 1 = 0.4, and α = 0.25; those for whom T 1i was greater than 1 (non-responders) were assigned T 1i = t 10 = 1 (the maximum time the first-stage intervention is administered). The indicator of response status was then defined by δ i = I(T 1i < 1). The final outcomes Y 2i (i = 1, …, n) were generated using submodel (4), with ε 2i ~ N(0, 5²). The values of the other parameters in submodel (4) are reported in Table 1 (without interactions) and Table 3 (with interactions).
Table 1 Simulation results for the design in Fig. 1: the estimated means, based on 500 replicates, are reported for coefficients in model (4)
Table 2 Simulation results for the design in Fig. 1: the estimated means, based on 500 replicates, are reported for the final outcomes of the eight adaptive interventions embedded in the design
For the intervention strategy depicted in Fig. 1, there are eight adaptive interventions embedded in the design, represented by the three indicators A 1, A 2R , and A 2NR . For example, in adaptive intervention (A 1, A 2R , A 2NR ) = (−1, 1, 1), participants are initially randomized to the medication (A 1 = −1); those who respond are re-randomized to continue on the medication (A 2R = 1) and those who do not respond are re-randomized to increased medication (A 2NR = 1). Another example of an adaptive intervention is (A 1, A 2R , A 2NR ) = (1, 1, −1), in which participants are initially randomized to a behavioral intervention (A 1 = 1); those who respond are re-randomized to continue on the behavioral intervention (A 2R = 1), and those who do not respond are re-randomized to an augmented arm (M + B, A 2NR = −1). For the design in Fig. 2, only the non-responders are re-randomized in the second stage. Therefore, there are four embedded adaptive interventions in this design, represented by the vector of two indicators (A 1, A 2NR ). For example, (−1, −1) represents the adaptive intervention in which participants are initially randomized to medication (A 1 = −1) and those who do not respond are re-randomized to the augmented arm (M + B, A 2NR = −1), whereas responders continue on the medication arm. Using this design, we also simulated the treatment of 1000 subjects. However, instead of using equal probability allocations as in Fig. 1, we used unequal probability allocations at both stages. Specifically, each of the 1000 subjects was initially assigned to either A 1 = −1 (medication) or A 1 = 1 (behavioral intervention) with probabilities 0.4 and 0.6, respectively. The non-responders were then re-allocated to either A 2NR = −1 (augmented first-stage intervention, M + B) or A 2NR = 1 (intensified first-stage intervention, M+ or B+) with probabilities 0.55 and 0.45, respectively, whereas all responders continued on their initial interventions (therefore, A 2R = 0) in the second stage. Random effects (b i ), errors (ε i ), and longitudinal outcomes (Y 0i , Y 1i , i = 1, …, n) were generated as described for Fig. 1. The final outcomes, Y 2i (i = 1, …, n), were also generated using submodel (4), but without the variable A 2Ri , with the parameter values reported in Tables 5 and 7 for the two scenarios, respectively.
In the first scenario, outcomes Y 2i (i = 1, …, n) were simulated without interaction terms and with the parameter values shown in Table 5; in the second scenario, outcomes Y 2i (i = 1, …, n) were simulated with interaction terms and with the parameter values shown in Table 7.
Table 3 Simulation results for the design in Fig. 1: the estimated means, based on 500 replicates, are reported for coefficients in model (4) with interactions
Table 4 Simulation results for the design in Fig. 1: the estimated means, based on 500 replicates, are reported for the final outcomes of the eight adaptive interventions embedded in the design with interactions in model (4)
An alternate simulation approach
For the design illustrated in Fig. 1, we also performed an alternate simulation approach that does not simulate values for T 1i from the Weibull distribution. Instead, we considered a situation in which the values of Y 1i are monitored and T 1i is the time at which Y 1i crosses a pre-specified boundary for the first time. In this simulation approach, the random effects {b i } and the error terms ε 0, ε 1 and ε 2 were all simulated in the same way as described above. Baseline outcomes {Y 0i } were simulated using submodel (2) with β 00 = 2 and ε 0i ~ N(0, 2²). Furthermore, we defined an individual i as a responder if he/she had a certain percentage reduction in the intermediate outcome value Y 1i compared with his/her baseline value Y 0i . This may be a more appropriate definition of responders in some practical scenarios than a simple reduction by a fixed amount (e.g., C = 9), as was used in the previous simulations. In this simulation, those with a 40 % reduction from their baseline values were considered responders. The parameter values used for submodel (3) were β 01 = −2, β 11 = −0.5, and β 31 = 5. For an individual i, we first simulated ε 1i ~ N(0, 2²) and calculated the time T* 1i for which β 01 + β 11 A 1i + β 31 T* 1i + b i + ε 1i equals a 40 % reduction from the baseline value Y 0i . We then defined T 1i = t 00 if T* 1i < t 00; T 1i = T* 1i if t 00 ≤ T* 1i ≤ t 10; and T 1i = t 10 if T* 1i ≥ t 10. T 1i was then substituted into the right-hand side of equation (3) to obtain the value of Y 1i for individual i (i = 1, …, n). The final outcomes Y 2i (i = 1, …, n) were generated using submodel (4), with ε 2i ~ N(0, 2²). As before, we simulated 500 replicates of n = 1000 individuals in each trial, and randomly assigned subjects (with probability 0.5) to one of the two first-stage interventions (i.e., A 1 equal to 1 [behavioral intervention] or −1 [medication]). Responders and non-responders to the initial interventions were then re-randomized (with probability 0.5) to one of the corresponding second-stage intervention options (i.e., A 2R and A 2NR were randomly assigned to be 1 or −1, with A 2R = 0 for non-responders and A 2NR = 0 for responders; see Fig. 1). We evaluated the performance of our two proposed analytic approaches in these simulated data sets by measuring (a) the means of the estimates of each of the adaptive interventions embedded in the design, (b) the parameter estimates in the model, (c) the mean squared error (MSE), (d) the estimated coverage probability of the 95 % confidence interval, and (e) the length of the confidence interval. Using these simulation parameters, we simulated two trials with identical sample sizes: (a) the time-varying SMART design and (b) the standard SMART design. A condensed sketch of the per-subject data generation for the first scenario of the Fig. 1 design is given below.
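The sketch below follows the parameter values reported above for that scenario; Δt is set to one time unit for illustration (an assumed value), and the responder-consistency constraint on Y 1i is omitted for brevity:

```r
set.seed(1)
n  <- 1000
b  <- rnorm(n, 0, 5)                        # subject-level random intercepts
A1 <- sample(c(-1, 1), n, replace = TRUE)   # first stage: medication (-1) or behavioral (+1)
Y0 <- 10 + b + rnorm(n, 0, 4)               # baseline outcome, submodel (2) with beta00 = 10

# Time to response: Weibull(shape = 1), scale = exp(g0 + g1*A1 + a*m0), left-truncated at 0.1
m0    <- 10 + b                             # underlying baseline trajectory m_i(0)
scl   <- exp(-1.5 + 0.4 * A1 + 0.25 * m0)
p0    <- pweibull(0.1, shape = 1, scale = scl)
T1    <- qweibull(p0 + runif(n) * (1 - p0), shape = 1, scale = scl)
delta <- as.numeric(T1 < 1)                 # responder indicator
T1    <- pmin(T1, 1)                        # non-responders censored at t10 = 1

Y1 <- 1 + 0.2 * A1 + 0.1 * T1 + b + rnorm(n, 0, 5)   # submodel (3), first scenario

# Second-stage randomization: responders get A2R, non-responders get A2NR
A2R  <- ifelse(delta == 1, sample(c(-1, 1), n, replace = TRUE), 0)
A2NR <- ifelse(delta == 0, sample(c(-1, 1), n, replace = TRUE), 0)
dt   <- 1                                   # fixed second-stage duration (assumed value)
T2   <- T1 + dt
```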
We evaluated the performance of the time-varying SMART design and an analogous standard SMART design by measuring (a) the power to select the optimal embedded intervention and (b) the associated cost. Tables 1-4 show the results of the two simulation scenarios based on the design shown in Fig. 1. Similarly, Tables 5-8 show the results of the two simulation scenarios for the design in Fig. 2. In Table 1, the true parameters were the coefficient of the first-stage interventions, β 12 = 0.4; the coefficient of the second-stage intervention for responders, β 22 = 0.5; the coefficient of the second-stage intervention for non-responders, β 23 = 0.5; and the coefficient of T 2, the total time of the first- and second-stage interventions, β 32 = 2. The estimates obtained using the TVMEM were \( \widehat{\beta}_{12}=0.275 \), \( \widehat{\beta}_{22}=0.503 \), \( \widehat{\beta}_{23}=0.501 \), and \( \widehat{\beta}_{32}=4.073 \), while the estimates obtained using the joint model were \( \tilde{\beta}_{12}=0.407 \), \( \tilde{\beta}_{22}=0.503 \), \( \tilde{\beta}_{23}=0.502 \), and \( \tilde{\beta}_{32}=1.790 \). Both approaches estimated coefficients β 22 and β 23 accurately. The parameters β 12 and β 32 were estimated accurately using the joint model, but poorly using the TVMEM. Similarly, in terms of the MSE, the length of the 95 % confidence interval, and the estimated coverage probability of the 95 % confidence interval, both approaches performed similarly for estimating β 22 and β 23, but joint modeling performed better for estimating β 12 and β 32. For example, for β 12, the estimated coverage probability obtained using the TVMEM was 88 %, whereas that obtained from the joint model was 97.8 %. For each of the eight embedded adaptive interventions in the design, Table 2 shows that both approaches accurately estimated the means of the final outcome, E[Y 2 | (A 1, A 2R , A 2NR )]. For example, the simulated means of the adaptive interventions (A 1, A 2R , A 2NR ) = (−1, −1, −1), (−1, 1, 1), and (1, 1, 1) were 4.538, 5.536, and 6.564, respectively, and the estimated means were 4.543, 5.531, and 6.569, respectively, using the TVMEM and the joint model. Tables 3 and 4 show results similar to those in Tables 1 and 2, respectively. In Table 3, the coefficient of the interaction of the first-stage and second-stage interventions among responders is denoted by β 41, and the coefficient of the interaction of the first-stage and second-stage interventions among non-responders is denoted by β 42. As shown in Table 3, both the TVMEM and joint modeling accurately estimated parameters β 22, β 23, β 41, and β 42, with little difference in the MSE, estimated coverage probability, and length of the 95 % confidence interval. However, as in Table 1, the joint modeling approach estimated β 12 and β 32 more accurately than the TVMEM approach. For example, the true coefficient of T 2 was β 32 = 2.0, which was poorly estimated as 4.122 using the TVMEM but estimated as 1.626 using the joint model. Table 4 shows that the estimated means of the eight adaptive interventions obtained from both analytical approaches were identical and close to the simulated means up to the third decimal. Similar trends were observed in Tables 5-8 for the two simulations of Fig. 2.
β 12 and β 32 were better estimated using the joint modeling approach, whereas all the other parameters and the means of the final outcomes of the four adaptive interventions embedded in the design were accurately estimated using both approaches. In Table 5, the true coefficient values of β 12 = 0.450 and β 32 = 2.0 were estimated as \( \widehat{\beta}_{12}=0.388 \) and \( \widehat{\beta}_{32}=4.046 \) using the TVMEM, and as \( \tilde{\beta}_{12}=0.456 \) and \( \tilde{\beta}_{32}=1.767 \) using the joint model. Coefficient β 23 was accurately estimated using both models. As for the four adaptive interventions (i.e., (A 1, A 2NR ) = (−1, 1), (−1, −1), (1, 1) and (1, −1)) embedded in the design of Fig. 2, Table 6 shows that the simulated means were 5.213, 4.802, 6.330, and 5.805, respectively, and the estimated means were 5.228, 4.790, 6.344, and 5.793, respectively, using the TVMEM, and 5.230, 4.788, 6.345, and 5.792, respectively, using the joint model.
Table 6 Simulation results for the design in Fig. 2: the estimated means, based on 500 replicates, are reported for the final outcomes of the four adaptive interventions embedded in the design
Table 7 shows that the true parameters β 12 = 0.40 and β 32 = 2.0 were estimated as \( \widehat{\beta}_{12}=0.298 \) and \( \widehat{\beta}_{32}=4.308 \) using the TVMEM, and as \( \tilde{\beta}_{12}=0.408 \) and \( \tilde{\beta}_{32}=1.784 \) using the joint model. The other two parameters, β 23 and β 42, were accurately estimated using both approaches. Table 8 shows that the means were accurately estimated using both approaches.
Table 8 Simulation results for the design in Fig. 2: the estimated means, based on 500 replicates, are reported for the final outcomes of the four adaptive interventions embedded in the design with interactions in model (4)
Tables 9 and 10 show the results from the alternative simulation strategy. In Table 9, the true coefficient values of β 12 = −0.6 and β 32 = −1.5 were estimated as \( \widehat{\beta}_{12}=-0.534 \) and \( \widehat{\beta}_{32}=-2.367 \) using the TVMEM, and as \( \tilde{\beta}_{12}=-0.608 \) and \( \tilde{\beta}_{32}=-1.338 \) using the joint model. Coefficients β 22 and β 23 and the means of the final outcomes of the eight adaptive interventions embedded in the design were accurately estimated using both approaches (Table 10).
Table 9 Simulation results from the alternative simulation approach: the estimated means, based on 500 replicates, are reported for coefficients in model (4)
Comparison of power between the time-varying SMART design and the standard SMART design
We analyzed the time-varying SMART design's ability to select the optimal embedded intervention and compared the associated power to that of the standard SMART design. We performed the comparison by conducting two trials with identical sample sizes and intervention effects using (a) the time-varying SMART design and (b) the standard SMART design. Figure 3 represents the standard SMART design that is analogous to the time-varying SMART design depicted in Fig. 1. The major difference between the two designs is that in the time-varying SMART design, a responder is re-randomized to the second-stage intervention at a random response time T 1 (< t 10), whereas in the standard SMART design, everyone is re-randomized at the fixed time point t 10. Responders are defined similarly in both designs.
In our example, a subject is considered a responder to the first-stage intervention if there is a significant decrease in the number of cigarettes the person smoked per day. The second-stage interventions are identical in both designs.
Example of standard SMART design with equal probability allocation: each participant is randomized twice
For both designs, we calculated the percentage of times that the best embedded intervention was selected (i.e., the power of the design). We simulated six parameter scenarios involving the true values of the coefficient of the first-stage interventions, β 21; the coefficient of the second-stage intervention for responders, β 22; the coefficient of the second-stage intervention for non-responders, β 23; the coefficient of T 2, the total time of the first- and second-stage interventions, β 32; the coefficient of the interaction of the first-stage and second-stage interventions among responders, β 41; and the coefficient of the interaction of the first-stage and second-stage interventions among non-responders, β 42. The simulated values of each of these parameters are reported in Tables 11 and 12. The simulation results are based on 500 replicates and are shown in Table 11 for the comparison of the two designs in Fig. 1 (time-varying SMART) and Fig. 3 (analogous standard SMART). Overall, both designs were equally effective in selecting the optimal embedded adaptive intervention. For example, when β 21 = 0.4, β 22 = 0.5, β 23 = 0.5 and β 32 = 2, the time-varying SMART design analyzed with the joint model had 82.8 % power to select the optimal embedded adaptive intervention, whereas the power associated with the standard SMART design was 83.0 %. Similar results were obtained when comparing the time-varying SMART design in Fig. 2 and the standard SMART design in Fig. 4 (see Table 12).
Table 10 Simulation results from the alternative simulation approach: the estimated means, based on 500 replicates, are reported for the final outcomes of the eight adaptive interventions embedded in the design
Table 11 Power to select the optimal embedded adaptive intervention strategy for designs in Figs. 1 and 3
Example of standard SMART design: only non-responders are re-randomized in the second stage
Comparison of the cost associated with conducting the time-varying SMART design versus that associated with conducting the standard SMART design
To assess the cost associated with conducting trials using these two competing designs, we considered a linear cost function for both SMART designs. Let c 1 and c 2 be the costs of the medication (M) and the behavioral intervention (B), respectively. Additionally, we assumed that the reduced and increased intensities of the first-stage intervention cost half and twice the cost of the first-stage intervention, respectively, and that augmentation of the first-stage intervention in the second stage (M + B) has the cost c 1 + c 2. Using these parameters, the cost for the time-varying SMART design in Fig. 1 is
$$ \begin{aligned} \mathrm{cost} ={}& c_1\Big(\sum_{A_{1i}=M} T_{1i}\Big) + c_1\Big(\sum_{A_{2i}=M}\Delta t\Big) + \Big(\frac{c_1}{2}\Big)\Big(\sum_{A_{2i}=M-}\Delta t\Big) + (2c_1)\Big(\sum_{A_{2i}=M+}\Delta t\Big)\\ &+ c_2\Big(\sum_{A_{1i}=B} T_{1i}\Big) + c_2\Big(\sum_{A_{2i}=B}\Delta t\Big) + \Big(\frac{c_2}{2}\Big)\Big(\sum_{A_{2i}=B-}\Delta t\Big) + (2c_2)\Big(\sum_{A_{2i}=B+}\Delta t\Big) + (c_1+c_2)\Big(\sum_{A_{2i}=M+B}\Delta t\Big), \end{aligned} $$

and the cost for the corresponding standard SMART in Fig. 3 is

$$ \begin{aligned} \mathrm{cost} ={}& c_1\Big(\sum_{A_{1i}=M} t_{10}\Big) + c_1\Big(\sum_{A_{2i}=M}\Delta t\Big) + \Big(\frac{c_1}{2}\Big)\Big(\sum_{A_{2i}=M-}\Delta t\Big) + (2c_1)\Big(\sum_{A_{2i}=M+}\Delta t\Big)\\ &+ c_2\Big(\sum_{A_{1i}=B} t_{10}\Big) + c_2\Big(\sum_{A_{2i}=B}\Delta t\Big) + \Big(\frac{c_2}{2}\Big)\Big(\sum_{A_{2i}=B-}\Delta t\Big) + (2c_2)\Big(\sum_{A_{2i}=B+}\Delta t\Big) + (c_1+c_2)\Big(\sum_{A_{2i}=M+B}\Delta t\Big). \end{aligned} $$

Similarly, the cost for the time-varying SMART in Fig. 2 is

$$ \begin{aligned} \mathrm{cost} ={}& c_1\Big(\sum_{A_{1i}=M} T_{1i}\Big) + c_1\Big(\sum_{A_{2i}=M}\Delta t\Big) + (2c_1)\Big(\sum_{A_{2i}=M+}\Delta t\Big)\\ &+ c_2\Big(\sum_{A_{1i}=B} T_{1i}\Big) + c_2\Big(\sum_{A_{2i}=B}\Delta t\Big) + (2c_2)\Big(\sum_{A_{2i}=B+}\Delta t\Big) + (c_1+c_2)\Big(\sum_{A_{2i}=M+B}\Delta t\Big), \end{aligned} $$

and the cost for the corresponding standard SMART in Fig. 4 is

$$ \begin{aligned} \mathrm{cost} ={}& c_1\Big(\sum_{A_{1i}=M} t_{10}\Big) + c_1\Big(\sum_{A_{2i}=M}\Delta t\Big) + (2c_1)\Big(\sum_{A_{2i}=M+}\Delta t\Big)\\ &+ c_2\Big(\sum_{A_{1i}=B} t_{10}\Big) + c_2\Big(\sum_{A_{2i}=B}\Delta t\Big) + (2c_2)\Big(\sum_{A_{2i}=B+}\Delta t\Big) + (c_1+c_2)\Big(\sum_{A_{2i}=M+B}\Delta t\Big). \end{aligned} $$

Note that in the above equations, T 1i = t 10 for non-responders, and \( (c_1+c_2)\big(\sum_{A_{2i}=M+B}\Delta t\big) \) is the cost of the second stage for all the subjects assigned to the intervention M + B. Figure 5 shows the cost as a function of c 1 and c 2, where red represents the cost of the time-varying SMART design and blue represents the cost of the standard SMART design. The cost of the time-varying SMART is less than the cost of the standard SMART in all scenarios. Table 13 shows the average costs and standard deviations calculated at selected values of c 1 and c 2 based on 1000 replicates.
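A small R helper implementing the linear cost function above for the design in Fig. 1 might look as follows (a sketch only: the data frame, column names and arm labels are hypothetical; T1 is the realized first-stage duration, already set to t10 for non-responders, and dt is the fixed second-stage duration Δt):

```r
# Total cost of one simulated trial under the linear cost model above
smart_cost <- function(dat, c1, c2, dt) {
  # second-stage unit costs by assigned arm label
  unit2 <- c("M" = c1, "M-" = c1 / 2, "M+" = 2 * c1,
             "B" = c2, "B-" = c2 / 2, "B+" = 2 * c2, "M+B" = c1 + c2)
  stage1 <- ifelse(dat$A1 == "M", c1, c2) * dat$T1   # cost rate times first-stage duration
  stage2 <- unit2[as.character(dat$A2)] * dt         # fixed second-stage duration
  sum(stage1 + stage2)
}
# For the standard SMART, apply the same function with dat$T1 set to t10 for every subject.
```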
For example, when the unit costs are c 1 = 2 and c 2 = 1 for the medication and behavioral intervention, the average cost of the time-varying SMART in Fig. 1 is 3446.5, with standard deviation 49.87, while the average cost of the corresponding standard SMART is 3935.8, with standard deviation 41.47. Thus, the time-varying SMART reduces the cost by about 12 % relative to the standard SMART in this scenario.
The cost associated with implementing a standard SMART (blue) and equivalent time-varying SMART (red)
Table 13 Examples of the average cost for time-varying SMART and the standard SMART
In the standard SMART design, the timing of allocating the intervention is generally ignored, which leads to a regression model without time as a predictor. Therefore, in this article, we proposed a time-varying SMART design that allows the re-randomization to the second-stage interventions to occur at different time points for different individuals. The two modeling approaches we proposed for analyzing data from such time-varying SMART designs provided good estimations of the means of the final outcomes of all the embedded interventions. However, the joint modeling approach provided more accurate parameter estimates and higher estimated coverage probability than the TVMEM, and thus we recommend the joint model for analyzing data generated from time-varying SMART designs. In the examples illustrated in Figs. 1 and 2, a participant was defined as a responder if there was a significant decrease in the number of cigarettes the participant smoked per day. One may question the validity of re-randomizing individuals who have a quick response to the first-stage intervention, because such a response indicates the effectiveness of the intervention. However, if significant adverse effects are associated with the intervention (e.g., radiation therapy for many types of cancer is commonly associated with skin damage [31], fatigue [32, 33], diarrhea [34, 35], and rectal bleeding [36]), it is reasonable to shorten the duration of the intervention to avoid side effects. Therefore, the allocation strategy for the responders in the examples of the time-varying SMART design makes it more efficient than the standard SMART design. We proposed two approaches for analyzing the longitudinal outcomes obtained from the time-varying SMART design: the TVMEM and the joint model. According to the simulation results, the joint modeling approach better estimated the effects of the duration of the intervention (i.e., T 2) and the first-stage interventions (i.e., A 1) in model (4). More specifically, the joint modeling approach had more accurate estimates, smaller MSEs, higher estimated coverage probabilities, and smaller 95 % confidence intervals (i.e., smaller estimated standard deviations) for the coefficients of the first-stage intervention and the time of intervention. Because we wanted to illustrate the cost efficiency of the proposed time-varying SMART design and its ability to select the optimal embedded adaptive intervention, we implemented rather simplified linear mixed-effects submodels (2)-(4) of the more general TVMEM in model (1). We showed that the joint model performs better than the TVMEM in analyzing the data collected from such time-varying SMART designs. The joint modeling approach extracts part of the information contained in the time of the response, which is a function of the first-stage treatment assignment.
Also, the association between the longitudinal and event outcomes is accounted for by the random effect that underlies both the longitudinal and survival processes for each subject. Therefore, although complex, time-varying SMART designs may require more complicated models for time and an extra layer of joint modeling, and as such one would expect a better performance from joint modeling in general. Nevertheless, both modeling approaches performed well in estimating the other parameters and the mean of the final outcomes for each adaptive intervention embedded in the corresponding designs. Furthermore, equation (1) is a general form of TVMEM, and in our study is equivalent to equations (2) ~ (4) at time points t = 0, T 1i , T 2i for each subject i. T 1i is a subject-specific random variable, and coefficients in equation (3) can also be subject-specific. However, in practice, modeling coefficients to be subject-specific may lead to the estimation of too many parameters which, in some scenarios, may not be identifiable, particularly with small sample sizes. Therefore, as an initial attempt, we modeled T 1i as a subject-specific random variable and the coefficients as fixed parameters. For example, coefficients β 0(t), β 1(t), β 3(t) in equation (1) are fixed coefficients β 01, β 11, β 31 in equation (3), as model (1) is equivalent to submodel (3) at time point T 1i . More complicated models such as subject-specific and time-varying coefficients in submodels (2)-(4) can be considered, if the sample sizes are large. We also illustrated the effectiveness of the joint modeling approach in accurately estimating the parameters even when no specific model was assumed for the duration of the first-stage intervention, T 1i . The conclusions were qualitatively similar as that in the simulation where Weibull model was assumed for the duration of the first-stage intervention. In the scenarios we considered here, the time at which individuals were re-randomized was assessed only for responders to the first-stage intervention. However, one may also consider varying times for the non-responders and for the second-stage interventions. For example, a non-responder showing severe side effects or no trend towards achieving intermediate goals may be re-randomized sooner than t 10. The analytic approaches for such designs would be similar to the joint or time-varying mixed effects models proposed in this manuscript, for example, with an extra submodel for the duration of the second-stage interventions. Instead of randomization with certain pre-defined probabilities (e.g., in the first two simulation scenarios, randomization with probability 0.5 was used for both stages; in the last two scenarios, unequal randomization with probabilities 0.4(0.6) and 0.55(0.45) was used for the two stages, respectively), information concerning potential moderators could be used to tailor and assign the interventions. For example, the choice of the first-stage intervention options could depend on the severity of the subject's smoking habit at the beginning of the study; whereas the choice of the second-stage intervention option could depend on the subject's adherence to the first-stage intervention. The analysis of such a randomization scheme would require assigning weights for each subject [37]. We also compared the cost and power associated with selecting the optimal embedded adaptive intervention for the proposed time-varying SMART design versus that for the analogous standard SMART design. 
Our simulation results showed similar power for the two designs. We used a linear cost function to assess the cost efficiency of the proposed design and found that it can have substantially lower cost than the standard design. Several other forms of cost functions can be used to assess cost efficiency. However, as long as the cost is an increasing function of time, the proposed time-varying SMART design will have lower cost than the standard SMART design. Therefore, the time-varying SMART design can be used to study how the intensity and combination of two types of interventions might be adapted to a subject's progress in a cost- and time-efficient manner. In our study, we assume that there is no unmeasured confounder. As suggested by Chakraborty and Murphy [38], the assumption of "no unmeasured confounders" holds in a SMART design if the randomization probabilities of A1 at most depend on the baseline covariates, and the randomization probabilities of A2 at most depend on the baseline covariates, the intermediate outcome, and A1. We performed additional simulations to investigate the role of unmeasured confounders on the parameter estimations. From these simulations, we see that when the unmeasured confounders affect only T 1 and Y 1 , the parameter estimation is still accurate (Additional file 4: Table S4). However, when these unmeasured confounders affect Y 2 , there is bias in the estimation of T 2 (Additional file 4: Tables S5-S6). In the ADHD SMART study discussed by Nahum-Shani et al. [20], a weighted average was applied to the final outcomes when their primary goal of the study was to compare the imbedded adaptive intervention options in the SMART. In our Time-Varying SMART study, we used regression-based methods to identify more efficient adaptive decision rules for each subject along with their longitudinal outcomes. Similar to the analytic process of the standard SMART design by Q-learning in which a regression model for the outcome is postulated at each decision as a function of the patient's information to that point, our TVMEM in equation (1) is equivalent to submodels (2)-(4) at three time points of longitudinal outcomes for each individual. Therefore, we did not include weights in this study of the time-varying SMART design. However, for increased complexity of time-varying SMART designs, weights may be incorporated into the analysis in a future study to develop more robust estimations and results. The proposed time-varying two-stage SMART design can take into account the time associated with the first-stage interventions and thus could result in clinical trials with fewer side effects and lower cost. Additionally, the two modeling approaches we proposed are able to provide good estimations of the means of the final outcomes of all the embedded interventions. The joint modeling approach resulted in more accurate estimates and higher estimated coverage probabilities; therefore, we recommend using joint modeling to analyze data generated from the time-varying designs proposed in this manuscript. Almirall D, Nahum-Shani I, Sherwood NE, Murphy SA. Introduction to SMART designs for the development of adaptive interventions: with application to weight loss research. Transl Behav Med. 2014;4:260–74. Dawson R, Lavori P.W. Placebo-free designs for evaluating new mental health treatments: the use of adaptive treatment strategies. Statistics in medicine. 2004;23:3249–3262. Murphy SA. An experimental design for the development of adaptive treatment strategies. Stat Med. 
2005;24:1455–81. Lavori PW, Dawson R. Introduction to dynamic treatment strategies and sequential multiple assignment randomization. Clin Trials. 2014;11:393–9. Murphy SA, Collins LM, Rush AJ. Customizing treatment to the patient: adaptive treatment strategies. Drug Alcohol Depend. 2007;88 Suppl 2:S1–3. Lavori PW, Dawson R. Dynamic treatment regimes: practical design considerations. Clin Trials. 2004;1:9–20. Lavori PW, Dawson R, Rush AJ. Flexible treatment strategies in chronic disease: clinical and research implications. Biol Psychiatry. 2000;48:605–14. Heinrichs RW. Cognitive improvement in response to antipsychotic drugs: neurocognitive effects of antipsychotic medications in patients with chronic schizophrenia in the CATIE Trial. Arch Gen Psychiatry. 2007;64:631–2. Rush AJ, Trivedi M, Fava M. Depression, IV: STAR*D treatment trial for depression. Am J Psychiatry. 2003;160:237. Lavori PW, Rush AJ, Wisniewski SR, Alpert J, Fava M, Kupfer DJ, et al. Strengthening clinical effectiveness trials: equipoise-stratified randomization. Biol Psychiatry. 2001;50:792–801. Thall PF, Millikan RE, Sung HG. Evaluating multiple treatment courses in clinical trials. Stat Med. 2000;19:1011–28. Collins LM, Murphy SA, Bierman KA. A conceptual framework for adaptive preventive interventions. Prevention Science. 2004;5:181-192. Nahum-Shani I, Qian M, Almirall D, Pelham WE, Gnagy B, Fabiano GA, et al. Q-learning: a data analysis method for constructing adaptive interventions. Psychol Methods. 2012;17:478–94. Kasari C, Kaiser A, Goods K, Nietfeld J, Mathy P, Landa R, et al. Communication interventions for minimally verbal children with autism: a sequential multiple assignment randomized trial. J Am Acad Child Adolesc Psychiatry. 2014;53:635–46. Schulte PJ, Tsiatis AA, Laber EB, Davidian M. Q- and A-learning Methods for Estimating Optimal Dynamic Treatment Regimes. Stat Sci. 2014;29:640–61. Zhao YQ, Zeng D, Laber EB, Kosorok MR. New Statistical Learning Methods for Estimating Optimal Dynamic Treatment Regimes. J Am Stat Assoc. 2015;110:583–98. Zhang B, Tsiatis AA, Laber EB, Davidian M. Robust estimation of optimal dynamic treatment regimes for sequential treatment decisions. Biometrika. 2003;100:681–694. Lu X, Lynch KG, Oslin DW, Murphy S. Comparing treatment policies with assistance from the structural nested mean model. Biometrics. 2016;72(1):10-19. Lu X, Nahum-Shani I, Kasari C, Lynch KG, Oslin DW, Pelham WE et al. Comparing dynamic treatment regimes using repeated-measures outcomes: modeling considerations in SMART studies. Statistics in Medicine. 2016;35(10):1595-1615. Nahum-Shani I, Qian M, Almirall D, Pelham WE, Gnagy B, Fabiano GA, et al. Experimental design and primary data analysis methods for comparing adaptive interventions. Psychol Methods. 2012;17:457–77. Lei H, Nahum-Shani I, Lynch K, Oslin D, Murphy SA. A "SMART" design for building individualized treatment sequences. Annu Rev Clin Psychol. 2012;8:21–48. Pelham WE, Fabiano GA, Waxmonsky JG, Greiner AR, Gnagy EM, Pelham WE, et al. Treatment Sequencing for Childhood ADHD: A Multiple-Randomization Study of Adaptive Medication and Behavioral Interventions. Journal of Clinical Child and Adolescent Psychology. 45(4):396-415. doi:10.1080/15374416.2015.1105138. Page TF, Pelham Iii WE, Fabiano GA, Greiner AR, Gnagy EM, Hart KC, et al. Comparative Cost Analysis of Sequential, Adaptive, Behavioral, Pharmacological, and Combined Treatments for Childhood ADHD. 
Journal of Clinical Child and Adolescent Psychology: the Official Journal For the Society of Clinical Child and Adolescent Psychology. American Psychological Association, Division 53. 1-12. PMID 26808137. doi:10.1080/15374416.2015.1055859. Fagerstrom K, Hughes J. Varenicline in the treatment of tobacco dependence. Neuropsychiatr Dis Treat. 2008;4:353–63. Ebbert JO, Wyatt KD, Hays JT, Klee EW, Hurt RD. Varenicline for smoking cessation: efficacy, safety, and treatment recommendations. Patient Prefer Adherence. 2010;4:355–62. Cinciripini PM, Robinson JD, Karam-Hage M, Minnix JA, Lam C, Versace F, et al. Effects of varenicline and bupropion sustained-release use plus intensive smoking cessation counseling on prolonged abstinence from smoking and on depression, negative affect, and other symptoms of nicotine withdrawal. JAMA Psychiatry. 2013;70:522–33. Tan X, Shiyko MP, Li R, Li Y, Dierker L. A time-varying effect model for intensive longitudinal data. Psychol Methods. 2012;17:61–77. Shiyko MP, Lanza ST, Tan X, Li R, Shiffman S. Using the time-varying effect model (TVEM) to examine dynamic associations between negative affect and self confidence on smoking urges: differences between successful quitters and relapsers. Prev Sci. 2012;13:288–99. Henderson R, Diggle P, Dobson A. Joint modelling of longitudinal measurements and event time data. Biostatistics. 2000;1:465–80. Rizopoulos D. JM: An R Package for the Joint Modelling of Longitudinal and Time-to-Event Data. J Stat Softw. 2015;35:1–33. Collen EB, Mayer MN. Acute effects of radiation treatment: skin reactions. Can Vet J. 2006;47:931–5. Bower JE. Behavioral symptoms in patients with breast cancer and survivors. J Clin Oncol. 2008;26:768–77. Taunk NK, Haffty BG, Chen S, Khan AJ, Nelson C, Pierce D, et al. Comparison of radiation-induced fatigue across 3 different radiotherapeutic methods for early stage breast cancer. Cancer. 2011;117:4116–24. Hombrink J, Voss AC, Frohlich D, Glatzel M, Krauss A, Glaser FH. Therapy trends in the prevention of radiation-induced diarrhea after pelvic and abdominal irradiation. Results of a tricenter study. Strahlenther Onkol. 1995;171:49–53. Harris K, Doyle M, Barnes EA, Sinclair E, Danjoux C, Barbera L, et al. Diarrhea as a radiation side effect "welcomed" by patients taking opioids. J Pain Symptom Manage. 2006;31:97–8. Stacey R, Green JT. Radiation-induced small bowel disease: latest developments and clinical guidance. Ther Adv Chronic Dis. 2014;5:15–29. Robins JM, Hernan MA, Brumback B. Marginal structural models and causal inference in epidemiology. Epidemiology. 2000;11:550–60. Chakraborty B, Murphy SA. Dynamic Treatment Regimes. Annu Rev Stat Appl. 2014;1:447–64. We wish to thank the two reviewers for constructive comments on earlier version of the manuscript. This work was supported in part by the National Institutes of Health (grants R01CA131324, R01DE022891, and R25DA026120 to S. Shete), and Cancer Prevention Research Institute of Texas grant RP130123 (SS). This research was supported, in part, by Barnhart Family Distinguished Professorship in Targeted Therapy (SS) and by the National Institutes of Health through Cancer Center Support Grant P30CA016672. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The data supporting the conclusions of this article are included within the article. Conception and design: TD, SS. Development of methodology: TD, SS. Simulation and methods implementation: TD. 
Writing, review, and/or revision of the manuscript: TD, SS. Study supervision: SS. Both authors read and approved the final manuscript. Department of Biostatistics, The University of Texas MD Anderson Cancer Center, 1400 Pressler Dr, FCT4.6002, Houston, TX, 77030, USA Tianjiao Dai & Sanjay Shete Department of Epidemiology, The University of Texas MD Anderson Cancer Center, Houston, TX, 77030, USA Sanjay Shete Tianjiao Dai Correspondence to Sanjay Shete. Embedded adaptive interventions in the SMART design of Figs. 1 and 2. Table S1. Eight embedded adaptive interventions in the SMART design of Fig. 1. Table S2. Four embedded adaptive interventions in the SMART design of Fig. 2, word document. (DOCX 14 kb) Conditional expectation of TVMEM, word document. (DOCX 44 kb) Data organization and Implementation. Table S3. Longitudinal data organization, word document. (DOCX 61 kb) Table S4. The effect sizes associated with U1 and U2 influencing T1 and Y1 but not Y2. Table S5. The effect sizes associated with U1 and U2 influencing T1, Y1 and Y2. Table S6. The effect sizes associated with U1 and U2 influencing T1 and Y2 but not Y1, word document. (DOCX 49 kb) Dai, T., Shete, S. Time-varying SMART design and data analysis methods for evaluating adaptive intervention effects. BMC Med Res Methodol 16, 112 (2016). https://doi.org/10.1186/s12874-016-0202-7 Received: 19 December 2015 Adaptive interventions Sequential multiple assignment randomized trial (SMART) Time-varying mixed effects model (TVMEM) Longitudinal model
The Path Of The Wildebeest Golf a program or function which gives the \$n^{\text{th}}\$ location of the wildebeest who starts at square \$1\$ on an infinite chessboard which is numbered in an anti-clockwise square spiral, where the wildebeest always visits the lowest numbered square she can reach that she has not yet visited. Inspiration: The Trapped Knight and OEIS A316667. Edit: This sequence is now on the OEIS as A323763. The code may produce the \$n^{\text{th}}\$ location, the first \$n\$ locations, or generate the sequence taking no input. Feel free to give her location after (or up to) \$n\$ leaps instead, but if so please state this clearly in your answer and make sure that an input of \$n=0\$ yields 1 (or [1] if appropriate). This is code-golf, so the aim is to produce working code in as few bytes as possible in your chosen language. Note: the wildebeest becomes trapped (much like the knight does at his \$2016^{\text{th}}\$ location, square \$2084\$, and the camel does at his \$3723^{\text{rd}}\$, square \$7081\$) at her \$12899744968^{\text{th}}\$ location on square \$12851850258\$. The behaviour of your code may be undefined for \$n\$ larger than this. (Thanks to Deadcode for the C++ code that found this!) The board looks like the below, and continues indefinitely: 101 100 99 98 97 96 95 94 93 92 91 102 65 64 63 62 61 60 59 58 57 90 105 68 39 18 5 4 3 12 29 54 87 A wildebeest is a "gnu" fairy chess piece - a non-standard chess piece which may move both as a knight (a \$(1,2)\$-leaper) and as a camel (a \$(1,3)\$-leaper). As such she could move to any of these locations from her starting location of \$1\$: . . . . . . . . . . . . . . . 35 . 33 . . . . . . 39 18 . . . 12 29 . . . . . . . (1) . . . . . The lowest of these is \$10\$ and she has not yet visited that square, so \$10\$ is the second term in the sequence. Next she could move from \$10\$ to any of these locations: . . . . . . 14 . 30 . . . . . . . . 3 . 29 . . . . . . 6 1 . . . 53 86 . . . . . . . (10) . . . . . . . 22 23 . . . 51 84 However, she has already visited square \$1\$ so her third location is square \$3\$, the lowest she has not yet visited. The first \$100\$ terms of the path of the wildebeest are: 1, 10, 3, 6, 9, 4, 7, 2, 5, 8, 11, 14, 18, 15, 12, 16, 19, 22, 41, 17, 33, 30, 34, 13, 27, 23, 20, 24, 44, 40, 21, 39, 36, 60, 31, 53, 26, 46, 25, 28, 32, 29, 51, 47, 75, 42, 45, 71, 74, 70, 38, 35, 59, 56, 86, 50, 78, 49, 52, 80, 83, 79, 115, 73, 107, 67, 64, 68, 37, 61, 93, 55, 58, 54, 84, 48, 76, 43, 69, 103, 63, 66, 62, 94, 57, 87, 125, 82, 118, 77, 113, 72, 106, 148, 65, 97, 137, 91, 129, 85 The first \$11\$ leaps are knight moves so the first \$12\$ terms coincide with A316667. code-golf sequence integer chess Jonathan AllanJonathan Allan \$\begingroup\$ Comments are not for extended discussion; this conversation has been moved to chat. \$\endgroup\$ – Mego Jan 29 '19 at 14:05 JavaScript (Node.js), 191 ... 166 164 bytes Saved 2 bytes thanks to @grimy. Returns the \$N\$th term. n=>(g=(x,y)=>n--?g(Buffer('QPNP1O?O@242Q3C3').map(m=c=>g[i=4*((x+=c%6-2)*x>(y+=c%7-2)*y?x:y)**2,i-=(x>y||-1)*(i**.5+x+y)]|i>m||(H=x,V=y,m=i))&&H,V,g[m]=1):m+1)(1,2) Try it online! 
or See a formatted version Spiral indices In order to convert the coordinates \$(x,y)\$ into the spiral index \$I\$, we first compute the layer \$L\$ with: $$L=\max(|x|,|y|)$$ $$\begin{array}{c|ccccccc} &-3&-2&-1&0&+1&+2&+3\\ \hline -3&3&3&3&3&3&3&3\\ -2&3&2&2&2&2&2&3\\ -1&3&2&1&1&1&2&3\\ 0&3&2&1&0&1&2&3\\ +1&3&2&1&1&1&2&3\\ +2&3&2&2&2&2&2&3\\ +3&3&3&3&3&3&3&3 \end{array}$$ We then compute the position \$P\$ in the layer with: $$P=\begin{cases} 2L+x+y&\text{if }x>y\\ -(2L+x+y)&\text{if }x\le y \end{cases}$$ $$\begin{array}{c|ccccccc} &-3&-2&-1&0&+1&+2&+3\\ \hline -3&0&1&2&3&4&5&6\\ -2&-1&0&1&2&3&4&7\\ -1&-2&-1&0&1&2&5&8\\ 0&-3&-2&-1&0&3&6&9\\ +1&-4&-3&-2&-3&-4&7&10\\ +2&-5&-4&-5&-6&-7&-8&11\\ +3&-6&-7&-8&-9&-10&-11&-12 \end{array}$$ The final index \$I\$ is given by: $$I=4L^2-P$$ NB: The above formula gives a 0-indexed spiral. In the JS code, we actually compute \$4L^2\$ right away with: i = 4 * (x * x > y * y ? x : y) ** 2 And then subtract \$P\$ with: i -= (x > y || -1) * (i ** 0.5 + x + y) Moves of the wildebeest Given the current position \$(x,y)\$, the 16 possible target squares of the wildebeest are tested in the following order: $$\begin{array}{c|cccccccc} &-3&-2&-1&x&+1&+2&+3\\ \hline -3&\cdot&\cdot&9&\cdot&11&\cdot&\cdot\\ -2&\cdot&\cdot&8&\cdot&10&\cdot&\cdot\\ -1&7&6&\cdot&\cdot&\cdot&12&13\\ y&\cdot&\cdot&\cdot&\bullet&\cdot&\cdot&\cdot\\ +1&5&4&\cdot&\cdot&\cdot&14&15\\ +2&\cdot&\cdot&2&\cdot&0&\cdot&\cdot\\ +3&\cdot&\cdot&3&\cdot&1&\cdot&\cdot \end{array}$$ We walk through them by applying 16 pairs of signed values \$(dx,dy)\$. Each pair is encoded as a single ASCII character. ID | char. | ASCII code | c%6-2 | c%7-2 | cumulated ----+-------+------------+-------+-------+----------- 0 | 'Q' | 81 | +1 | +2 | (+1,+2) 1 | 'P' | 80 | 0 | +1 | (+1,+3) 2 | 'N' | 78 | -2 | -1 | (-1,+2) 3 | 'P' | 80 | 0 | +1 | (-1,+3) 4 | '1' | 49 | -1 | -2 | (-2,+1) 5 | 'O' | 79 | -1 | 0 | (-3,+1) 6 | '?' | 63 | +1 | -2 | (-2,-1) 7 | 'O' | 79 | -1 | 0 | (-3,-1) 8 | '@' | 64 | +2 | -1 | (-1,-2) 9 | '2' | 50 | 0 | -1 | (-1,-3) 10 | '4' | 52 | +2 | +1 | (+1,-2) 11 | '2' | 50 | 0 | -1 | (+1,-3) 12 | 'Q' | 81 | +1 | +2 | (+2,-1) 13 | '3' | 51 | +1 | 0 | (+3,-1) 14 | 'C' | 67 | -1 | +2 | (+2,+1) 15 | '3' | 51 | +1 | 0 | (+3,+1) We keep track of the minimum encountered value in \$m\$ and of the coordinates of the corresponding cell in \$(H,V)\$. Once the best candidate has been found, we mark it as visited by setting a flag in the object \$g\$, which is also our main recursive function. On the first iteration, we start with \$x=1\$ and \$y=2\$. This ensures that the first selected cell is \$(0,0)\$ and that it's the first cell to be marked as visited. ArnauldArnauld \$\begingroup\$ So much golfing, can't wait for the rundown of how all the magic works! \$\endgroup\$ – Jonathan Allan Jan 27 '19 at 23:25 \$\begingroup\$ did you have to use Buffer to force each character to be interpreted as a single byte? \$\endgroup\$ – Jonah Jan 28 '19 at 1:49 \$\begingroup\$ @Jonah Although it's been deprecated, the Buffer constructor still accepts a string. So, yes, this is a rather cheap way to convert it to a list of bytes -- as opposed to [..."string"].map(c=>do_something_with(c.charCodeAt())). \$\endgroup\$ – Arnauld Jan 28 '19 at 10:39 \$\begingroup\$ -2 bytes on the coordinate encoding: TIO \$\endgroup\$ – Grimmy Jan 28 '19 at 10:45 \$\begingroup\$ @Grimy Nicely done! 
\$\endgroup\$ – Arnauld Jan 28 '19 at 11:03 Coconut, 337 276 bytes def g((x,y))= A=abs(abs(x)-abs(y))+abs(x)+abs(y) int(A**2+math.copysign(A+x-y,.5-x-y)+1) p=x,y=0,0;s={p};z=[2,3,1,1]*2 while 1:yield g(p);p=x,y=min(((a+x,b+y)for a,b in zip((1,1,2,-2,-1,-1,3,-3)*2,z+[-v for v in z])if(a+x,b+y)not in s),key=g);s.add(p) Returns a generator of values. Could probably be golfed more. (Especially the sequence of difference tuples.) Spiral algorithm taken from this math.se answer. Solomon UckoSolomon Ucko \$\begingroup\$ for a,b in ( -> for a,b in( (maybe you can golf the delta tuple of tuples itself too) \$\endgroup\$ – Jonathan Allan Jan 27 '19 at 22:21 \$\begingroup\$ No need for q and a zip is shorter for the tuples: 306 bytes may still be golfable of course \$\endgroup\$ – Jonathan Allan Jan 27 '19 at 22:36 \$\begingroup\$ ...how about this for 284? EDIT... this for 278 \$\endgroup\$ – Jonathan Allan Jan 27 '19 at 22:45 \$\begingroup\$ FWIW, that math.se answer has x and y swapped and both negative relative to the coordinate system in this challenge (where positive x is right and y is up). Not that it'd make any difference due to the symmetries, but still. \$\endgroup\$ – Deadcode Jan 27 '19 at 22:47 \$\begingroup\$ 0.5->.5 for another byte save; A**2->A*A for one more. \$\endgroup\$ – Jonathan Allan Jan 27 '19 at 23:18 05AB1E, 77 65 58 57 52 bytes Xˆ0UF3D(Ÿ0KãʒÄ1¢}εX+}Dε·nàDtyÆ+yO·<.±*->}D¯KßDˆkèU}¯ -6 bytes thanks to @Arnauld by using a port of his formula. Outputs the first \$n+1\$ values as a list (of decimals). Try it online (the ï in the footer removes the .0 to make the output more compact, but feel free to remove it to see the actual result). Code explanation: Xˆ # Put integer 1 in the global_array (global_array is empty by default) 0U # Set variable `X` to 0 (`X` is 1 by default) F # Loop the (implicit) input amount of times: 3D(Ÿ # Push the list in the range [-3,3]: [-3,-2,-1,0,1,2,3] 0K # Remove the 0: [-3,-2,-1,1,2,3] ã # Cartesian product with itself, creating each possible pair: [[3,3],[3,2],[3,1],[3,-1],[3,-2],[3,-3],[2,3],[2,2],[2,1],[2,-1],[2,-2],[2,-3],[1,3],[1,2],[1,1],[1,-1],[1,-2],[1,-3],[-1,3],[-1,2],[-1,1],[-1,-1],[-1,-2],[-1,-3],[-2,3],[-2,2],[-2,1],[-2,-1],[-2,-2],[-2,-3],[-3,3],[-3,2],[-3,1],[-3,-1],[-3,-2],[-3,-3]] ʒ } # Filter this list of pairs by: Ä # Where the absolute values of the pair 1¢ # Contains exactly one 1 # (We now have the following pairs left: [[3,1],[3,-1],[2,1],[2,-1],[1,3],[1,2],[1,-2],[1,-3],[-1,3],[-1,2],[-1,-2],[-1,-3],[-2,1],[-2,-1],[-3,1],[-3,-1]]) εX+} # Add the variable `X` (previous coordinate) to each item in the list D # Duplicate this list of coordinates ε # Map each `x,y`-coordinate to: · # Double both the `x` and `y` in the coordinate n # Then take the square of each à # And then pop and push the maximum of the two Dt # Duplicate this maximum, and take its square-root yÆ # Calculate `x-y` + # And add it to the square-root yO # Calculate `x+y` · # Double it < # Decrease it by 1 .± # And pop and push its signum (-1 if < 0; 0 if 0; 1 if > 0) * # Multiply these two together - # And subtract it from the duplicated maximum > # And finally increase it by 1 to make it 1-based instead of 0-based }D # After the map: Duplicate that list with values ¯K # Remove all values that are already present in the global_array ß # Pop the list of (remaining) values and push the minimum Dˆ # Duplicate this minimum, and pop and add the copy to the global_array k # Then get its index in the complete list of values è # And use that index to get the 
corresponding coordinate U # Pop and store this coordinate in variable `X` for the next iteration }¯ # After the outer loop: push the global_array (which is output implicitly) General explanation: We hold all results (and therefore values we've already encountered) in the global_array, which is initially started as [1]. We hold the current \$x,y\$-coordinate in variable X, which is initially [0,0]. The list of coordinates we can reach based on the current \$x,y\$-coordinate are: [[x+3,y+1], [x+3,y-1], [x+2,y+1], [x+2,y-1], [x+1,y+3], [x+1,y+2], [x+1,y-2], [x+1,y-3], [x-1,y+3], [x-1,y+2], [x-1,y-2], [x-1,y-3], [x-2,y+1], [x-2,y-1], [x-3,y+1], [x-3,y-1]] The list I mention in the code explanation above holds these values we can jump to, after which the current \$x,y\$ (stored in variable X) is added. Then it will calculate the spiral values based on these \$x,y\$-coordinates. It does this by using the following formula for a given \$x,y\$-coordinate: $${T = max((2 * x) ^ 2, (2 * y) ^ 2)}$$ $${R = T - (x - y + √T) * signum((x + y) * 2 - 1) + 1}$$ Which is the same formula @Arnauld is using in his answer, but written differently to make use of 05AB1E's builtins for double, square, -1, +1, etc. (If you want to see just this spiral part of the code in action: Try it online.) After we've got all the values we can reach for the given \$x,y\$-coordinate, we remove all values that are already present in the global_array, and we then get the minimum of the (remaining) values. This minimum is then added to the global_array, and variable X is replaced with the \$x,y\$-coordinate of this minimum. After we've looped the input amount of times, the program will output this global_array as result. Kevin Cruijssen \$\begingroup\$ FWIW, here is a port of my own formula to convert the coordinates into spiral indices. It's 5 bytes shorter but yields floats. (I don't know if this is a problem or not.) \$\endgroup\$ – Arnauld Jan 28 '19 at 16:58 \$\begingroup\$ (Note that \$y\$ in your code is \$-y\$ in mine.) \$\endgroup\$ – Arnauld Jan 28 '19 at 17:07 \$\begingroup\$ @Arnauld Thanks, that saves 5 additional bytes. :) EDIT: Which you already mentioned in your first comment. ;p \$\endgroup\$ – Kevin Cruijssen Jan 28 '19 at 17:36
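For readers who just want to check terms of the sequence without deciphering the golfed submissions, below is an ungolfed reference sketch in Python (a verification aid, not a competing answer). Rather than using the closed-form spiral-index formulas quoted above, it builds the anti-clockwise spiral by walking it outward and then greedily simulates the wildebeest; the finite board size is an arbitrary assumption that is comfortably large enough for the first few thousand terms.

# Ungolfed reference sketch (Python): first n locations of the wildebeest, 1-indexed squares.
def spiral_numbers(cells):
    # Walk the anti-clockwise square spiral: right 1, up 1, left 2, down 2, right 3, up 3, ...
    num = {(0, 0): 1}
    x = y = 0
    k = 1
    step, d = 1, 0
    dirs = [(1, 0), (0, 1), (-1, 0), (0, -1)]          # right, up, left, down
    while k < cells:
        for _ in range(2):                             # each step length is used twice
            dx, dy = dirs[d % 4]
            for _ in range(step):
                x, y, k = x + dx, y + dy, k + 1
                num[(x, y)] = k
                if k == cells:
                    return num
            d += 1
        step += 1
    return num

def wildebeest(n, cells=250_000):                      # board size: ample for small n
    num = spiral_numbers(cells)
    pos = {v: c for c, v in num.items()}               # square number -> (x, y)
    leaps = [(sx * a, sy * b) for a, b in ((1, 2), (2, 1), (1, 3), (3, 1))
             for sx in (1, -1) for sy in (1, -1)]      # the 16 knight + camel moves
    seq, visited, (x, y) = [1], {1}, (0, 0)
    while len(seq) < n:
        reachable = [num[(x + dx, y + dy)] for dx, dy in leaps
                     if (x + dx, y + dy) in num and num[(x + dx, y + dy)] not in visited]
        nxt = min(reachable)                           # lowest-numbered unvisited square she can reach
        seq.append(nxt)
        visited.add(nxt)
        x, y = pos[nxt]
    return seq

print(wildebeest(12))    # [1, 10, 3, 6, 9, 4, 7, 2, 5, 8, 11, 14]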
Modeling the impact of high temperatures on microalgal viability and photosynthetic activity Quentin Béchet1,2, Martin Laviale1,2, Nicolas Arsapin1,2, Hubert Bonnefond1,2 & Olivier Bernard1,2 Culture collapse due to high temperatures can significantly impact the profitability of outdoor algal cultivation systems. The objective of this study was to model for the first time the impact of high temperatures on algal activity and viability. Viability measurements on Dunaliella salina cultures were based on cytometry with two fluorescent markers (erythrosine and fluorescein di-acetate), and photosynthetic activity was measured by Pulse Amplitude Modulation (PAM) fluorometry. Kinetic studies revealed that viability and activity losses during exposure to high temperatures could be described by a Weibull model. Both mortality and activity were shown to be functions of the thermal dose received by the algae, defined as the product of duration of exposure to high temperatures and an exponential function of temperature. Simulations at five climatic locations revealed that culture collapse due to high temperatures could impact productivity of D. salina in non-temperature-controlled outdoor photobioreactors by 35 and 40% in arid and Mediterranean climates, respectively. The model developed in this study can be used to forecast the impact of high temperatures on algal biofuel productivity. When coupled with models predicting the temperature of outdoor cultivation systems, this model can also be used to select the best combination of location, system geometry, and algal species to minimize the risks of culture collapse and therefore maximize biofuel productivity. Massive investments were done on microalgae industry in the last decades, mainly due to their capacity to synthesize lipids for biofuel production [1]. The economic feasibility of this new biotechnology at full-scale has been the object of a large number of studies [2,3,4] but remains difficult to accurately evaluate, mainly because of uncertainties on the actual algal productivity that can be reached at full-scale (i.e., biomass produced per unit time per square meter of installation). Mathematical models have been developed to predict and optimize the biofuel production potential of microalgae as a function of local climate (light intensity, temperature, etc.) and process operation (retention time, nutrients concentration, etc.) [5,6,7]. Regarding the impact of temperature, existing models are able to accurately estimate productivity when temperature is within a range of values enabling algal growth [8]. However, temperatures of typical cultivation systems (i.e., photobioreactors, open ponds) can exceed these temperatures. For example, Torzillo et al. [9] observed that the temperature in a photobioreactor located in Florence, Italy, reached levels higher than 40 °C for several hours per day in summer. Tredici and Materassi [10] even observed temperatures as high as 56 °C in vertical alveolar panels that caused the collapse of the thermotolerant Spirulina sp. In these conditions, heat stress impacts structure and activity of proteins and membrane fluidity, which disturbs metabolic processes and leads to retardation in growth [11, 12]. Cooling the system is then necessary to avoid culture collapses but strongly increases operation costs at full-scale [13]. 
As culture collapses would have a dramatic impact on the system profitability and environmental footprint, models predicting the impact of heat stress on algal productivity are needed to accurately assess full-scale biofuel production [14]. Several studies aimed to understand the impact of heat stress on microalgae [15], and especially on microalgae symbiotic with coral [16] and microphytobenthos [17,18,19,20]. For example, Vieira et al. [20] showed that the photosynthetic activity of two microphytobenthos communities significantly decreased during continuous exposure to a temperature of 42 °C. The work of Serra-Maia et al. [21] also highlighted the impact of temperature on cell mortality in photobioreactors exposed to high temperatures for several days. However, to the best of our knowledge, no previous study systematically measured the evolution of photosynthetic activity (i.e., the rate of electron transfer in algal photosystems) during heat stress over short time-scales (from minutes to hours) for various temperatures. In addition, the rate at which algae die when exposed to high temperatures was, to the best of our knowledge, never measured. From what temperature do algae start to die? How long can algae survive when exposed to lethal temperatures? To answer these questions, the objective of this study was to develop a model for predicting algal photosynthetic activity and viability (i.e., the fraction of living cells in the culture) when algae are exposed to high temperatures. For this purpose, the effect of short-term (<3 h) heat exposure (between 41 and 60 °C) on the photosynthetic activity and viability of the commercial species Dunaliella salina were studied. D. salina is a species which has been studied for its potential to produce both carotenoids and triacylglycerols (TAG) which can be turned into biofuel [22]. To quantify the impact of high temperature on productivity at full-scale, the resulting viability and activity models were coupled with a model predicting temperature fluctuations in outdoor photobioreactors at various climatic locations. Review on existing mortality models To the best of our knowledge, the short-term impact of high temperatures on the viability and photosynthetic activity of microalgae has not been previously modeled. However, multiple models exist to predict the mortality rate of bacteria under various stresses: high/low pH stress [23], high temperatures [24], high pressure [25], etc. This section reviews the mortality models developed for bacteria with the objective to select the most relevant model to describe the impact of high temperatures on microalgae. Several formulas were used in the literature to describe the survival rate of bacteria exposed to heat stress (see the reviews [26,27,28]). One of the most traditional approaches is based on a first-order model, assuming a constant mortality rate m (s−1), expressed as follows: $$N(t) = N_{0} \exp ( - mt),$$ where N is the number of viable cells at the time t (s) and N 0 the number of viable cells at t = 0 s. This model has been largely criticized in the literature due to its inability to represent experimental data, and especially the initial lag-phase usually observed at the start of mortality events [28,29,30,31], which was also observed in this study (see "Results" section). The model presented by Geerard et al. 
[27] aimed to better represent this initial lag-phase and was expressed as a set of two differential equations: $$\frac{{{\text{d}}N}}{{{\text{d}}t}} = - m_{\text{m}} N\left( {\frac{1}{{1 + C_{\text{c}} }}} \right)\left( {1 - \frac{{N_{\text{res}} }}{N}} \right)$$ $$\frac{{{\text{d}}C_{c} }}{{{\text{d}}t}} = - m_{\text{m}} C_{\text{c}},$$ where C c is a variable representing the "physiological state" of cells, m m is the maximal decay rate (s−1), and N res is a model parameter. A simpler model, the 'log-logistic model' proposed by Cole et al. [32], was used in numerous studies to represent the impact of various stresses on bacterial viability [25, 30, 33,34,35]: $$N(t) = N_{0} \exp \left( {\alpha + \frac{\omega - \alpha }{{1 + \exp \left( {\frac{4\sigma (\tau - \log (t))}{\omega - \alpha }} \right)}}} \right),$$ where α and ω are, respectively, the viable cell counts at the start and at the end of the mortality event (in log values), σ is a shape factor, and τ is a scale factor. This model was in agreement with experimental data but was specifically designed to represent the case where the final cell count is different from 0 (for example, in the case of bacterial resistance to stress). As described in the "Results" section, no algae survived after heat treatment in our kinetic studies and this model was therefore not adapted. Finally, the other commonly used model in the literature is the Weibull model, described as $$N(t) = N_{0} \,\exp \left( { - \left( {\frac{t}{\lambda }} \right)^{n} } \right),$$ where λ is the half-life parameter (s) and n is the shape factor (λ is the time of exposure necessary to kill 63% of the population; low n values indicate a sharp decrease of the viability over time). Van Boekel [29] reported 55 studies that successfully used the Weibull model to represent viability loss during heat treatment of various bacteria. This model is able to represent the initial lag-phase at the start of heat treatment, has a limited number of parameters, and is practical to use. Based on this literature review, the Weibull model was selected to represent the decrease of activity and viability of algae during exposure to high temperatures. Algal species and cultivation conditions The commercial species D. salina (CCAP 18/19) was cultivated in f/2-enriched seawater medium [36]. An axenic culture (volume 300 mL) was maintained at 27 ± 1 °C under continuous light (300 μmol m−2 s−1). Homogenous mixing of the culture was ensured by bubbling filtered air (PTFE filters 0.2 µm, Midisart 2000, Sartorius) combined with gentle magnetic stirring. Bubbling also removed excessive oxygen and supplied inorganic carbon. The culture was operated in a semi-continuous mode by replacing twice a day a fraction of the culture by freshly prepared medium, thereby maintaining the algal concentration at approximately 7 × 105 cells mL−1. Kinetic studies of algal activity and viability at high temperatures The algal cell density was determined using a particle counter (HIAC-Royco; Pacific Scientific Instruments). Variability between triplicate measurements was routinely less than 5%. The algal culture was then diluted with 0.2 µm-filtered f/2 medium to reach a concentration of around 104 cells mL−1, which was found optimal for both activity and viability measurements. Several 1 mL aliquots of this diluted culture was placed in 1.5 mL centrifuge tubes (Safe-Lock tubes; Eppendorf AG, Germany) and immersed in a water bath preheated at the desired temperature (41, 42, 43, 45, 50 or 60 °C). 
The thermal inertia of the samples was very low due to their reduced volume, and thus the desired temperature was very rapidly reached. The time necessary for cooling was however in the same order of magnitude than the time of exposure to 60 °C (maximum of 3 min), which may have impacted the accuracy of measurements as discussed in the "Results" section. While this sudden temperature drop may have impacted cell activity and viability, this was necessary to control the exposure time to temperature during experiments. To avoid potential issues resulting from the reduced sample volume, algae were highly diluted and kept in the dark when exposed to heat, which avoided inhibition or limitation by oxygen, inorganic carbon, or nutrients during the incubation time. At least six different exposure times were tested for each temperature (from 30 s to 3 h depending on the tested temperature) to estimate the evolution of viability and activity during the course of high-temperature exposure. Inhibition by excessive oxygen concentration or growth limitation by limiting carbon supply was therefore unlikely during heat stress. Sedimentation was not observed in viable samples, mostly because D. salina cells are motile. It is therefore unlikely that experimental conditions significantly impacted viability or activity of algal cells. At the end of each exposure time, tubes were immediately placed in a colder water bath (20 °C), still in dark conditions. Viability measurements through cytometry were performed 1 and 6 h after the end of heat exposure as described below (for example, for the kinetics study at 41 °C, tubes were first exposed to a temperature of 41 °C for up to 3 h, and then viability was measured 1 and 6 h after the end of heating). As for the estimation of photosynthetic activity, the tubes were first cooled down to 20 °C before being transferred to a 27 °C water bath (i.e., the temperature used for cultivating the algae) and in dim light until PAM analysis was performed within 1 h after the heat exposure. Assessment of microalgal viability by flow cytometry Various experimental techniques have been proposed in the literature and a definition of a "viable cell" depends on the technique used. For example, the "viable cell count" measures the fraction of cells able to produce a single colony on an Agar plate, while some staining techniques rely on the ability of dead/living cells to absorb a certain dye [37]. In this study, two fluorescent markers were used to measure viability by cytometry (BD Accuri™ C6 Plus): erythrosine (Erythrosin extra-bluish, CAS: 16423-68-0, Sigma-Aldrich, USA), which stains algal cells with a porous cell membrane (i.e., dead cells [38]), and fluorescein di-acetate (FDA, CAS: 596-09-8, Sigma-Aldrich, USA), which stains algal cells having enzymatic activity (i.e., living cells [39]). The fluorescent markers were added to the algal samples by adding a small volume of concentrated markers (erythrosine: 20 μL mL−1 culture at 1 g L−1 sea water filtered at 0.2 µm; FDA: 3 μL mL−1 culture at 10 mg mL−1 acetone; marker concentrations were optimized to ensure that viable and non-viable cells could be clearly identified; data not shown). For measurements with erythrosine, samples were exposed for 60 min to the marker in the dark. For measurements with FDA, samples were kept for 20 min at room temperature (20 °C) and in the ambient light (measured within 20–50 μmol m−2 s−1) to stimulate enzymatic cell activity of living cells. 
Cells fluorescence was measured 1 and 6 h after heat exposure by cytometry on three different fluorometry channels. Chlorophyll a fluorescence was measured by using the "FL3" channel (Excitation/Emission: 488/>670 nm) and enabled distinguishing algae from bacteria and/or other non-photosynthetic particles. "FL2" (Excitation/Emission: 488/585 nm) and "FL1" (Excitation/Emission: 488/530 nm) were used to detect the fluorescence of erythrosine and FDA, respectively. The fractions of living and dead cells were determined from erythrosine/FDA fluorescence vs. Chlorophyll fluorescence plots (see Additional file 1: S1 for an illustration). This viability measurement protocol was validated on samples with known ratios of viable and non-viable cells. For this purpose, a sample from the culture of D. salina was heated at 45 °C for 1 h in order to kill all microalgae (i.e., 0% viability). Known volumes of this sample were mixed with another non-killed sample (i.e., close to 100% viability) in order to obtain the following theoretical fractions of killed algal cells: 0, 25, 50, 75, and 100%. The viability of each of these samples was then measured for different incubation times with erythrosine and FDA (Fig. 1). In practice, the total cell concentration in the heated solution was lower than in the non-heated solution due to cell degradation during heating (microscopic observations; data not shown). The ratios of killed and non-killed cells in each sample were therefore re-calculated by determining from cell counts the number of cells degraded during heating. The significant linear correlations shown in Fig. 1 (R 2 = 0.99, N = 15 for erythrosine and R 2 = 0.98, N = 25 for FDA) for the two viability markers indicate that the technique developed in this study enabled accurate measurements of D. salina viability. Viability of D. salina cultures for different killed and non-killed ratios for erythrosine (a) and FDA (b) and for different incubation times (Erythrosine: crosses 15 min; diamonds 2 h; circles 3 h; FDA: crosses 6 min; diamonds 21 min; circles 36 min; stars 51 min; 'plus' signs 67 min). Error bars represent 95% confidence intervals Assessment of microalgal photosynthetic activity by in vivo chlorophyll fluorescence analysis Chlorophyll fluorescence was measured by Pulse Amplitude Modulation (PAM) fluorometry, which has been shown to be a useful technique for assessing the effect of temperature on photosynthetic activity [12, 15, 20]. The fluorescence signal was measured with a Multi-Color PAM (Heinz Walz GmbH, Germany) equipped with a temperature-controlled block for cuvette set at 27 °C ± 1 °C, a RG665 long-pass filter on the fluorescence detector, a blue LED (440 nm) as source of actinic light, and a white LED used as a light source for saturating pulses [40]. For each tube exposed to heat, a 500 µL-aliquot was transferred into a quartz cuvette (QS-10, Hellma Analytics) before being diluted with fresh medium (total volume of the cuvette: 1.25 mL) to avoid cell mutual shading during PAM measurement. A so-called "rapid light curve" (RLC) protocol was then applied [41]. For this purpose, each sample was exposed for 5 min to a light intensity of 22 µmol m−2 s−1, which corresponds to the first light step of the RLC, to ensure that all samples could experience the same short-term light history, i.e., that photosystems were activated and that the chlorophyll fluorescence signal reached steady-state [42]. 
The sample was then exposed to 7 successive 10 s steps of increasing actinic light: 22, 79, 218, 467, 812, 1336, and 1890 µmol m−2 s−1. A saturating light pulse was applied at the end of each step and the instantaneous and maximum light-acclimated fluorescence levels (F and F M ', respectively) were measured. Thus the effective quantum yield of photosystem II (ΦPSII) was calculated for each light step according to Genty et al. [43]: $$\varPhi {\rm{PSII}} = \frac{{F_{\text{M}}^{\prime} - F}}{{F_{\text{M}}^{\prime }}}$$ For each sample, a ΦPSII-I curve (i.e., rapid light curve [41]) was thus obtained, where I is the instantaneous photosynthetically active radiation (PAR: 400–700 nm, in µmol m−2 s−1), which was previously measured inside the cuvette with a spherical micro quantum sensor (US-SQS/L, Walz). According to Silsbe and Kromkamp [44], each ΦPSII-I curve was then fitted to the model of Eilers and Peeters [45]. The 'Eilers and Peeters' model, when reparametrized as suggested by [8], can be characterized by the following parameters: the photosynthetic efficiency at low light intensity (i.e., α, the initial slope of the curve), the maximal rate of photosynthesis (i.e., the maximal value of the curve plateau), and the light intensity threshold between photosaturation and photoinhibition [46]. All photosynthetic parameters varied significantly with heat exposure (see Additional file 1: S2 for details). Among them, the initial slope of the curve α was chosen as the best indicator of the algal photosynthetic activity as it was estimated with the highest level of confidence (see Additional file 1: S2 for an illustration). Model calibration and statistical analysis Each kinetic study (i.e., the evolution of photosynthetic activity and viability during heating) was used to fit the Weibull model by least-square regression (Matlab function lsqcurvefit). Viability and photosynthetic activity were measured in duplicates for each time of exposure. Confidence intervals on measured algal viabilities and activities were based on a statistical analysis of the differences between duplicate values. The uncertainty on model parameters was then estimated through Monte Carlo simulations based on these confidence intervals, as detailed in Additional file 1: S3. Prediction of the impact of high temperature on full-scale cultivation Simulations were performed to determine the impact of high temperatures in outdoor photobioreactors at various climatic locations, based on the model of viability and activity developed in this study. Simulations were performed at five climatic locations representing arid, Mediterranean, subtropical, tropical, and temperate climates as described in [13]. The temperature prediction was coupled to the models of algal activity and viability described in the "Results" section. Unless otherwise stated, it was assumed that the photobioreactor was re-inoculated with fully viable algae at sunrise the day following a culture collapse. The number of culture collapses to high temperature was defined as the number of days when viability at the end of the day was lower than 1%. Impact of high temperatures on D. salina viability Figure 2 shows that the viability of D. salina did not significantly decrease for temperatures lower than 43 °C (values below 100% viability in Fig. 2 for 41 and 42 °C are most likely due to experimental uncertainty). This confirms that cell viability was not impacted by the test experimental conditions (i.e., tubes were kept in the dark and without agitation). 
The protocol used in this study therefore enables studying the impact of temperature stress only while allowing for low thermal inertia, on the contrary to previous protocols described in the literature (e.g., Serra-Maia et al. [21]). Above 43 °C, the rate of viability loss increased with temperature, which is consistent with previous results reported in literature for bacteria [29]. This increase can be explained by the fact that algal death is most likely due to degradation of key enzymes and membrane denaturation, the rate of which follows an Arrhenius function of temperature [29]. Interestingly, viability measured with erythrosine (Fig. 2a–c) decreased slightly faster with the time of heat exposure than the viability measured with the FDA marker (Fig. 2b–d). Erythrosine is adsorbed by dead cells due to membrane permeability [47], whereas FDA is absorbed by cells and then hydrolyzed by enzymes of viable cells, thus resulting in the production of a fluorescent compound. The differences in the rates of viability loss between erythrosine and FDA therefore indicate that algal cell membrane became permeable slightly before enzymatic activity stopped. In addition, Fig. 2 shows that viability measured 6 h after exposure to high temperatures was significantly lower than viability 1 h after heat exposure. This decrease is unlikely due to slow penetration of erythrosine in dead cells as Béchet et al. [47] showed that only a few minutes were necessary for erythrosine to enter dead algal cells. This decrease therefore suggests that algae continued dying after exposure to high temperatures. Algal death due to high temperatures can therefore be considered as a two-step process: a fast decrease of viability during heat exposure followed by a slower decrease after heat exposure. Evolution of D. salina viability with time of exposure to high temperatures (Marker/Incubation time: a erythrosine/1 h; b FDA/1 h; c erythrosine/6 h; d FDA/6 h; crosses experimental data; line Weibull model; Thin blue line T = 41 °C; Thin red dash line T = 42 °C; Thin black point line T = 43 °C; Thick blue line T = 45 °C; Thick red dash line T = 50 °C; Thick black point line T = 60 °C). Error bars represent 95% confidence intervals. The 100% viability measured in the tubes non-exposed to heat (t = 0) shows that experimental conditions (no agitation, dark conditions) did not impact algal viability over the duration of kinetic studies The measured rates of viability loss during heat exposure are consistent with previous observations from the literature: firstly, Fig. 2 shows that the Weibull model was able to represent the evolution of algal viability with the time of heat exposure, which was observed for diverse microorganisms (see the "Background" section for details; see Additional file 1: S4 for models comparison). Secondly, Fig. 3 shows that the half-life parameter λ (Eq. 5) followed an exponential function of temperature, which was reported by most of the 55 studies reviewed by van Boekel [29]: Evolution of Weibull λ parameter (Eq. 5) with temperature when erythrosine (a) and FDA (b) were used to measure D. salina viability 1 h after heat exposure (see Table 1 for parameters values). Results obtained for measurements performed 6 h after heat exposure are shown in Additional file 1: S5. Error bars represent 95% confidence intervals $$\lambda = \exp (a_{\text{V}} (T - T_{{0,{\text{V}}}} )),$$ where a v is the exponential coefficient (°C−1). Thirdly, this coefficient a V (Eq. 7) determined for D. 
salina was between −0.218 and −0.222 °C−1 (Table 1), which is within the range reported by van Boekel [29] for diverse microorganisms (−0.05 to −0.37). Finally, the shape factor 'n' did not follow a clear evolution with temperature (data not shown), which suggests that variations of n are likely due to experimental errors. This observation is also consistent with the findings of van Boekel [29] who showed that the shape parameter was independent of temperature for the large majority of the microorganisms tested. Based on these similarities, mechanisms responsible for algal and bacterial deaths due to exposure to high temperatures are likely to be similar. Following these observations, the viability model was expressed as follows: Table 1 Parameter values for the Weibull model (values in parentheses are 95% confidence intervals; model parameters shown in this table were obtained with erythrosine when viability was measured 1 h after heat exposure; see Additional file 1: S5 for parameters obtained 6 h after heat exposure)—see the "Results" section for parameter definitions $$V(t) = V_{0} \exp \left( { - \left( {\frac{t}{{\exp (a_{\text{V}} (T - T_{{0,{\text{V}}}} ))}}} \right)^{{n_{\text{V}} }} } \right),$$ where V is the viability; V 0 the initial viability; T 0,V (°C) and a V (°C−1) are obtained from log-linear regressions as shown in Fig. 3; and n V is obtained as the average of experimental measurements of the shape parameter at different temperatures (Table 1). Interestingly, Eq. 8 suggests that viability is a function of the 'dose' of heat, or 'thermal dose' (d V, s), which can be defined as: $$d_{V} = t \exp ( - a_{\text{V}} (T - T_{{0,{\text{V}}}} )),$$ where t (s) is the duration of exposure to the temperature T (°C). The concept of 'thermal dose' has already been used in the medical literature to represent the impact of high temperatures on the viability of tissue cells and was expressed using the same mathematical equation [48]. This type of expression suggests that viability of algal samples exposed to changing temperatures can be determined by integrating the dose d V over the period of time considered: $$V(t) = V_{0} \exp ( - (d_{\text{V}} )^{{n_{\text{V}} }} ).$$ Figure 4 shows that the Weibull model associated with the concept of thermal dose (Eq. 10) successfully predicted the evolution of algal viability with time for all temperatures tested. Evolution of the viability with the thermal dose as defined by Eq. 9 (Crosses/plain line measurements/prediction with erythrosine; Circles/dash-line measurements/prediction with FDA)—results obtained for measurements performed 6 h after heat exposure are shown in Additional file 1: S5. Error bars represent 95% confidence intervals Impact of high temperatures on D. salina photosynthetic activity Similar to what was observed for D. salina viability, the Weibull model was able to represent the evolution of algal photosynthetic activity with the duration of heat exposure (see Additional file 1: S6). Photosynthetic activity decreased faster than viability, which indicates that photosynthesis logically stopped before algal death. Consequently, accounting for the impact of high temperatures on viability only may lead to overestimating productivity, as cells stop photosynthesis before dying. Moreover, the half-life parameter λ for photosynthetic activity (Eq. 5) was also shown to follow an exponential function of temperature (Additional file 1: S6) and the shape factor n was not clearly correlated to temperature (data not shown).
Following the same approach as for algal viability, the following equations were used to represent the activity drop during heat exposure: $${A}(t) = {A}_{0} \exp ( - (d_{A} )^{{n_{A} }} )$$ $$d_{A} = t \exp ( - a_{A} (T - T_{{0,{\text{A}}}} )),$$ where A is the photosynthetic activity; A 0 the initial photosynthetic activity; d A the thermal dose for photosynthetic activity (s); and a A (°C−1), T 0,A and n A are defined in the same way as the corresponding parameters of the viability model (Eqs. 8 and 9). The poor fit shown in Fig. 5 can be explained by the uncertainty in the temperature of the test tubes (due to their thermal inertia) during exposure to high temperatures at these short time-scales (minutes). Evolution of the measured (crosses) and predicted (plain line) photosynthetic activity (characterized here by the slope of rapid light curves at low light intensities) with the thermal dose. Experimental data include the triplicates at 45 °C. Error bars represent 95% confidence intervals Accounting for viability loss in algal growth models Equations 9, 10, 11, and 12 can be used to express algal viability and photosynthetic activity when algae are exposed to constant temperatures. The objective of this section is to propose a modeling strategy to express viability and photosynthetic activity when algae are exposed to fluctuating temperatures, for example, in outdoor cultivation systems. The modeling approach proposed here consists of the following steps: (1) the temperature profile experienced by the algal culture should first be determined over the entire cultivation period to identify mortality events (i.e., the time periods at which temperature is above the lethal value; e.g., 43 °C for D. salina). (2) For each mortality event, the fractions of viable and active cells at the end of the event (f V and f A, respectively) should be determined by using the following equations: $$f_{\text{V}} = \exp ( - d_{\text{V}}^{{n_{\text{V}} }} ),$$ $$f_{\text{A}} = \exp ( - d_{\text{A}}^{{n_{\text{A}} }} ),$$ where n V and n A are given in Table 1 and d V and d A are the thermal doses defined as follows: $$d_{\text{V}} = \mathop \int \limits_{{t_{\text{s}} }}^{{t_{\text{e}} }} { \exp }( - a_{\text{V}} (T(t^{\prime}) - T_{{0,{\text{V}}}} )) \cdot {\text{d}}t^{\prime},$$ $$d_{\text{A}} = \mathop \int \limits_{{t_{\text{s}} }}^{{t_{\text{e}} }} \exp ( - a_{\text{A}} (T(t^{\prime}) - T_{{0,{\text{A}}}} )) \cdot {\text{d}}t^{\prime},$$ where t s and t e are the starting and end times of the mortality event, respectively (s); T is the time-dependent culture temperature (°C); and T 0,V and T 0,A (°C) are the parameters given in Table 1. (3) The active and viable algal biomass at the end of the mortality event can then be computed by multiplying the active and viable biomass concentrations before the mortality event by the two fractions f A and f V. When temperature is below the lethal level (43 °C for D. salina), the evolution of biomass concentration in the system can be predicted by using classical models, generally expressed by the following differential equation [5,6,7]: $$\frac{{{\text{d}}N}}{{{\text{d}}t}} = \phi (T)\mu (f_{\text{i}} )N,$$ where N is the algal cell concentration (m−3); Φ is a function representing the impact of temperature on the algal growth rate; μ is the specific growth rate (s−1); and f i represents the factors other than temperature impacting algal growth rate (light intensity, pH, nutrient concentrations, etc.).
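To make the recipe above concrete, here is a minimal numerical sketch in Python (using NumPy) of steps (2) and (3): it integrates the thermal doses of Eqs. 15 and 16 over a temperature time series covering one mortality event and returns the surviving fractions of Eqs. 13 and 14. Only a_V is taken from the text (about −0.22 °C−1); T_0,V, n_V and the activity parameters below are placeholders standing in for the calibrated values of Table 1, which is not reproduced here, and the temperature series is synthetic.

import numpy as np

# Placeholder parameters for illustration only: a_V lies within the range reported in
# the text (−0.218 to −0.222 °C−1); the other values are NOT the calibrated ones of Table 1.
A_V, T0_V, N_V = -0.22, 78.0, 2.0      # viability:  a_V (°C−1), T_0,V (°C), n_V (-)
A_A, T0_A, N_A = -0.22, 74.0, 2.0      # activity:   a_A (°C−1), T_0,A (°C), n_A (-)
T_LETHAL = 43.0                        # only periods above this level count as a mortality event

def surviving_fractions(t, T):
    # t: time (s) and T: culture temperature (°C), 1-D arrays covering one mortality event.
    # Implements Eqs. 13-16: the dose is the time integral of exp(-a (T - T_0)) while T >= 43 °C.
    t, T = np.asarray(t, float), np.asarray(T, float)
    hot = T >= T_LETHAL
    g_V = np.where(hot, np.exp(-A_V * (T - T0_V)), 0.0)
    g_A = np.where(hot, np.exp(-A_A * (T - T0_A)), 0.0)
    dt = np.diff(t)
    d_V = np.sum(0.5 * (g_V[:-1] + g_V[1:]) * dt)      # trapezoidal integration of the dose
    d_A = np.sum(0.5 * (g_A[:-1] + g_A[1:]) * dt)
    return np.exp(-d_V**N_V), np.exp(-d_A**N_A)        # f_V and f_A (Eqs. 13-14)

# Example: a synthetic afternoon excursion peaking near 45 °C for roughly two hours.
t = np.linspace(0.0, 6 * 3600.0, 721)                  # 6 h sampled every 30 s
T = 35.0 + 10.0 * np.exp(-((t - 3 * 3600.0) / 4000.0) ** 2)
f_V, f_A = surviving_fractions(t, T)
print(f"viable fraction: {f_V:.3g}, active fraction: {f_A:.3g}")

Multiplying the viable and active biomass present at the start of the event by these two fractions then gives the post-event state that feeds back into the growth model of Eq. 17.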
An important assumption behind this modeling approach is that non-active algae (i.e., algae which activity dropped to 0 after heat exposure) cannot recover after heat exposure. This assumption may, however, not be valid for some algal species such as some benthic algae which were observed to recover after an exposure to 50 °C [19]. Accounting for this recovery process could therefore refine the modeling approach, even if out of the scope of this study. The impact of this assumption on productivity predictions in outdoor cultivation systems is discussed in the following section. Impact at full-scale To demonstrate the impact of high temperatures on full-scale algal cultivation systems for biofuel production, the following simulations were performed. The temperature profile in tubular vertical outdoor photobioreactors (radius 0.095 m; height 1.8 m) was predicted by the validated model of Béchet et al. [49]. This model is based on a heat balance considering various heat fluxes reaching outdoor photobioreactors: solar heat flux, long-wave radiative fluxes, convection, etc. Model parameters and various assumptions were described by Béchet et al. [49]. The viability and photosynthetic activity models were then coupled with these predictions to determine the fractions of viable and active cells over 1 year of operation. The potential coupled impact of high light and high temperature was not accounted for in these simulations, simply because to the best of our knowledge, there is no model available to predict this coupled impact on algal viability. The simulations discussed in this section only aim to provide an estimation of the impact of high temperature only on biofuel production in outdoor cultivation systems. Figure 6 shows that mortality events leading to culture collapse would happen 76 and 131 days per year in Mediterranean and arid climates, respectively. These simulations were based on the model parameters obtained with erythrosine 1 h after exposure to high temperature (see Table 1 for details). Very similar results were obtained when using the model parameters obtained with other sets of parameters (obtained with FDA and/or 6 h after heat exposure; data not shown). This indicates that the variation of model parameters caused by the different experimental techniques during model calibration only caused a small level of uncertainty on the viability predictions in outdoor photobioreactors. Moreover, the number of days when photosynthesis was completely de-activated did not differ by more than 4 days from the number of days when algae died (Fig. 6). This low difference is due to thermal inertia of the closed photobioreactors used in this study. When temperature reached a level of 43 °C, the culture temperature indeed stayed above this lethal temperature for at least 1 h, leading to full loss of both viability and photosynthetic activity. This result indicates that the assumption that de-activated algae cannot recover after thermal stress does not significantly impact productivity predictions in photobioreactors as photosynthesis de-activation is almost automatically followed by algal death. 
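As a rough illustration of this yearly bookkeeping, the sketch below (Python, NumPy) assumes that an hourly culture-temperature series is already available, here a purely synthetic profile standing in for the heat-balance model of Béchet et al. [49], and simply counts the days whose end-of-day viable fraction drops below 1%. Each day is evaluated from a fully viable culture, so growth and re-inoculation dynamics are not modelled, and the parameter values are the same placeholders as in the previous sketch, not the calibrated values of Table 1.

import numpy as np

A_V, T0_V, N_V, T_LETHAL = -0.22, 78.0, 2.0, 43.0      # placeholder Weibull parameters

# Hypothetical hourly culture temperature over one year (8760 values, °C): a seasonal
# swing plus an afternoon peak; a real study would take this from a reactor heat balance.
hour = np.arange(365 * 24)
day, hod = hour // 24, hour % 24
T_culture = (20.0 + 12.0 * np.sin(np.pi * (day - 80) / 365.0)
             + 12.0 * np.exp(-((hod - 14) / 3.0) ** 2))

collapses = 0
for d in range(365):
    T_day = T_culture[day == d]
    hot = T_day >= T_LETHAL
    # each hot hour contributes 3600 s divided by the half-life at that temperature
    dose = np.sum(np.exp(-A_V * (T_day[hot] - T0_V))) * 3600.0
    if np.exp(-dose ** N_V) < 0.01:                    # end-of-day viability below 1%
        collapses += 1
print("days with a temperature-driven culture collapse:", collapses)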
Fig. 6 Number of days when viability (white bars) and photosynthetic activity (gray bars) was lower than 1% at the end of the day, and when temperature reached values higher than 43 °C at least once during the day (dark bars). Model parameters for the viability model were obtained with erythrosine 1 h after exposure to high temperature (see Table 1 for details)

Because of the practical necessity to grow an inoculum to re-inoculate outdoor photobioreactors, it was assumed that photobioreactors can be re-inoculated 5 days after a culture collapse. Under this assumption, algae could not be cultivated during a significant number of days in photobioreactors located in a Mediterranean or arid climate. The model predicted that between 35 and 40% of the light reaching the photobioreactors would be absorbed by dead algal cells in these two climates. Based on the assumption that biofuel productivity is proportional to the amount of light captured, mortality events would therefore reduce the yearly biofuel productivity of outdoor photobioreactors by approximately 35 and 40% in Mediterranean and arid climates, respectively.

Choice of the species

Based on the frequent occurrence of culture collapses predicted by the viability model, cultivating D. salina is not recommended in Mediterranean and arid climates without temperature control. These conclusions, however, only apply to column photobioreactors with the same geometry as the reactors considered in this study. For example, increasing the reactor radius, and thus its thermal inertia, could minimize temperature fluctuations and therefore avoid regular mortality events. The model developed in this study, when coupled to temperature-predicting models, can therefore be used as an optimization tool for system design to maximize biofuel production. In addition, while this study focused on the algal species D. salina, the same approach could be applied to other algal species, such as Spirulina platensis, which is known to resist high temperatures [9, 10], or to other species better suited for biofuel production. Unfortunately, many potential biofuel producers, such as Nannochloropsis sp. and Phaeodactylum tricornutum, are marine species that result from billions of years of selection in an environment where the temperature is generally below 30 °C. The optimal growth temperature of these species is therefore usually below 30 °C [8], and mortality rates may already be significant at these temperatures. This model can therefore be adapted and used to determine the best algal species and/or select the optimal location to maximize algal biofuel productivity.

Other model applications

Modeling the impact of high temperatures on microalgae could have many applications in the study of natural ecosystems, such as coral–microalgae symbiosis [16] and estuarine microphytobenthos communities [17]. For example, Laviale et al. [18] measured temperatures as high as 42 °C during summer in the top sediment layers of intertidal flats in Portugal (Atlantic coast of Southern Europe). Photosynthesis de-activation and even mortality events are therefore likely to occur in microphytobenthos communities. Predicting the impact of these high temperatures on algal photosynthetic activity and viability may be key to further understanding the coupled effects of light and heat stress on these microorganisms. In addition, because of global warming and the subsequent temperature rise in the oceans, significant rates of algal mortality may occur in marine environments.
Considering the high importance of phytoplankton in the food chain, the modeling approach developed in this study may help assess the impact of global warming on marine ecosystems.

Both algal viability and photosynthetic activity of D. salina were significantly affected above a temperature threshold of 43 °C, and their responses over the time of exposure to heat were shown to follow a two-parameter Weibull-like model. Algal viability and photosynthetic activity were both shown to be functions of the thermal dose, defined as the product of time and an exponential function of temperature. The application of the viability and activity models, coupled with a physical model predicting temperature fluctuations in closed photobioreactors, revealed that cultivating D. salina for biofuel production in this type of cultivation system is not viable in arid and Mediterranean climates due to the frequent occurrence of culture collapses. To a first approximation, the number of culture collapses can be assumed to be equal to the number of days when temperature exceeds the maximal temperature for algal activity and viability. When coupled with models predicting temperature in outdoor cultivation systems, the biological model developed in this study can be used to optimize the combination of algal species, location, and system geometry to maximize system profitability. Beyond the context of biofuel production, the model developed in this study could have many other applications in natural ecosystems.

Mata TM, Martins AA, Caetano NS. Microalgae for biodiesel production and other applications: a review. Renew Sustain Energy Rev. 2010;14:217–32. Richardson JW, Johnson MD, Outlaw JL. Economic comparison of open pond raceways to photo bio-reactors for profitable production of algae for transportation fuels in the Southwest. Algal Res. 2012;1:93–100. Rogers JN, Rosenberg JN, Guzman BJ, Oh VH, Mimbela LE, Ghassemi A. A critical analysis of paddlewheel-driven raceway ponds for algal biofuel production at commercial scales. Algal Res. 2013;4:76–88. doi:10.1016/j.algal.2013.11.007. Slade R, Bauen A. Micro-algae cultivation for biofuels: cost, energy balance, environmental impacts and future prospects. Biomass Bioenergy. 2013;53:29–38. Béchet Q, Shilton A, Guieysse B. Modeling the effects of light and temperature on algae growth: state of the art and critical assessment for productivity prediction during outdoor cultivation. Biotechnol Adv. 2013;31:1648–63. Bernard O, Mairet F, Chachuat B. Modelling of microalgae culture systems with applications to control and optimization. In: Posten C, Feng Chen S, editors. Microalgae biotechnol. Cham: Springer; 2016. p. 59–87. Lee E, Jalalizadeh M, Zhang Q. Growth kinetic models for microalgae cultivation: a review. Algal Res. 2015;12:497–512. Bernard O, Rémond B. Validation of a simple model accounting for light and temperature effect on microalgal growth. Bioresour Technol. 2012;123:520–7. doi:10.1016/j.biortech.2012.07.022. Torzillo G, Pushparaj B, Bocci F, Balloni W, Materassi R, Florenzano G. Production of Spirulina biomass in closed photobioreactors. Biomass. 1986;11:61–74. Tredici MR, Materassi R. From open ponds to vertical alveolar panels: the Italian experience in the development of reactors for the mass cultivation of phototrophic microorganisms. J Appl Phycol. 1992;4(3):221–31. doi:10.1007/BF02161208. Mathur S, Agrawal D, Jajoo A. Photosynthesis: response to high temperature stress. J Photochem Photobiol B Biol. 2014;137:116–26.
doi:10.1016/j.jphotobiol.2014.01.010. Zhang L, Liu J. Effects of heat stress on photosynthetic electron transport in a marine cyanobacterium Arthrospira sp. J Appl Phycol. 2016;28:757–63. doi:10.1007/s10811-015-0615-4. Béchet Q, Shilton A, Guieysse B. Full-scale validation of a model of algal productivity. Environ Sci Technol. 2014;48:13826–33. Ras M, Steyer J-P, Bernard O. Temperature effect on microalgae: a crucial factor for outdoor production. Rev Environ Sci Bio/Technol. 2013;12:153–64. doi:10.1007/s11157-013-9310-6. Hancke K, Hancke TB, Olsen LM, Johnsen G, Glud RN. Temperature effects on microalgal photosynthesis-light responses measured by o2 production, pulse-amplitude-modulated fluorescence, and (14) C assimilation(1). J Phycol. 2008;44:501–14. Downs CA, McDougall KE, Woodley CM, Fauth JE, Richmond RH, Kushmaro A, et al. Heat-stress and light-stress induce different cellular pathologies in the symbiotic dinoflagellate during coral bleaching. PLoS ONE. 2013;8:e77173. Blanchard G, Guarini J, Richard P, Gros P, Mornet F. Quantifying the short-term temperature effect on light-saturated photosynthesis of intertidal microphytobenthos. Mar Ecol Prog Ser. 1996;134:309–13. Laviale M, Barnett A, Ezequiel J, Lepetit B, Frankenbach S, Méléder V, et al. Response of intertidal benthic microalgal biofilms to a coupled light—temperature stress: evidence for latitudinal adaptation along the Atlantic coast of Southern Europe. Environ Microbiol. 2015;17:3662–77. doi:10.1111/1462-2920.12728. Salleh S, McMinn A. The effects of temperature on the photosynthetic parameters and recovery of two temperate benthic microalgae, Amphora cf. coffeaeformis and Cocconeis cf. sublittoralis (Bacillariophyceae). J Phycol. 2011;47:1413–24. Vieira S, Ribeiro L, Marques da Silva J, Cartaxana P. Effects of short-term changes in sediment temperature on the photosynthesis of two intertidal microphytobenthos communities. Estuar Coast Shelf Sci. 2013;119:112–8. doi:10.1016/j.ecss.2013.01.001. Serra-Maia R, Bernard O, Gonçalves A, Bensalem S, Lopes F. Influence of temperature on Chlorella vulgaris growth and mortality rates in a photobioreactor. Algal Res. 2016;18:352–9. Bonnefond H, Moelants N, Talec A, Mayzaud P, Bernard O, Sciandra A. Coupling and uncoupling of triglyceride and beta-carotene production by Dunaliella salina under nitrogen limitation and starvation. Biotechnol Biofuels. 2017;10:25. doi:10.1186/s13068-017-0713-4. Buchanan RL, Golden MH, Whiting RC, Phillips Jg, Smith JL. Non-thermal inactivation models for Listeria monocytogenes. J Food Sci. 1994;59:179–88. doi:10.1111/j.1365-2621.1994.tb06928.x. Bhaduri S, Smith PW, Palumbo SA, Turner-Jones CO, Smith JL, Marmer BS, et al. Thermal destruction of Listeria monocytogenes in liver sausage slurry. Food Microbiol. 1991;8:75–8. Chen H, Hoover DG. Modeling the combined effect of high hydrostatic pressure and mild heat on the inactivation kinetics of Listeria monocytogenes Scott A in whole milk. Innov Food Sci Emerg Technol. 2003;4:25–34. Xiong R, Xie G, Edmondson A, Linton R, Sheard M. Comparison of the Baranyi model with the modified Gompertz equation for modelling thermal inactivation of Listeria monocytogenes Scott A. Food Microbiol. 1999;16:269–79. Geeraerd AH, Herremans CH, Van Impe JF. Structural model requirements to describe microbial inactivation during a mild heat treatment. Int J Food Microbiol. 2000;59:185–209. Sun D-W. Handbook of food safety engineering. Blackwell: Wiley; 2012. van Boekel M. 
On the use of the Weibull model to describe thermal inactivation of microbial vegetative cells. Int J Food Microbiol. 2002;74:139–59. Buzrul S, Alpas H. Modeling the synergistic effect of high pressure and heat on inactivation kinetics of Listeria innocua: a preliminary study. FEMS Microbiol Lett. 2004;238:29–36. Moats WA. Kinetics of thermal death of bacteria. J Bacteriol. 1971;105:165–71. Cole MB, Davies KW, Munro G, Holyoak CD, Kilsby DC. A vitalistic model to describe the thermal inactivation of Listeria monocytogenes. Journal of Industrial Microbiology. 1993;12(3):232–9. doi:10.1007/BF01584195. Anderson WA, McClure PJ, Baird-Parker AC, Cole MB. The application of a log-logistic model to describe the thermal inactivation of Clostridium botulinum 213B at temperatures below 121.1°C. J Appl Microbiol. 1996;80(3):283–90. doi:10.1111/j.1365-2672.1996.tb03221.x. Little CL, Adams MR, Anderson WA, Cole MB. Application of a log-logistic model to describe the survival of Yersinia enterocolitica at sub-optimal pH and temperature. Int J Food Microbiol. 1994;22:63–71. Raso J, Alvarez I, Condon S, Trepat FJS. Predicting inactivation of Salmonella senftenberg by pulsed electric fields. Innov Food Sci Emerg Technol. 2000;1:21–9. Guillard RRL, Ryther JH. Studies of marine planktonic diatoms: I. Cyclotella Nana Hustedt and Denotula Confervacea (CLEVE) Gran. Can J Microbiol. 1962;8:229–39. doi:10.1139/m62-029. Brussaard CPD, Marie D, Thyrhaug R, Bratbak G. Flow cytometric analysis of phytoplankton viability following viral infection. Aquat Microb Ecol. 2001;26:157–66. Markelova AG, Vladimirova MG, Kuptsova ES. A comparison of cytochemical methods for the rapid evaluation of microalgal viability. J Plant Physiol. 2000;47(6):815–9. doi:10.1023/A:1026619514661. Berglund DL, Taffs RE, Robertson NP. A rapid analytical technique for flow cytometric analysis of cell viability using calcofluor white M2R. Cytometry Part A. 1987;8(4):421–6. doi:10.1002/cyto.990080412. Schreiber U, Klughammer C, Kolbowski J. Assessment of wavelength-dependent parameters of photosynthetic electron transport with a new type of multi-color PAM chlorophyll fluorometer. Photosynth Res. 2012;113:127–44. doi:10.1007/s11120-012-9758-1. White AJ, Critchley C. Rapid light curves: a new fluorescence method to assess the state of the photosynthetic apparatus. Photosynth Res. 1999;59:63–72. Kim Tiam S, Laviale M, Feurtet-Mazel A, Jan G, Gonzalez P, Mazzella N, et al. Herbicide toxicity on river biofilms assessed by pulse amplitude modulated (PAM) fluorometry. Aquat Toxicol. 2015;165:160–71. Genty B, Briantais J-M, Baker NR. The relationship between the quantum yield of photosynthetic electron transport and quenching of chlorophyll fluorescence. Biochim Biophys Acta Gen Subj. 1989;990:87–92. Silsbe GM, Kromkamp JC. Modeling the irradiance dependency of the quantum efficiency of photosynthesis. Limnol Oceanogr Methods. 2012;10:645–52. doi:10.4319/lom.2012.10.645. Eilers PHC, Peeters JCH. A model for the relationship between light intensity and the rate of photosynthesis in phytoplankton. Ecol Model. 1988;42:199–215. Henley WJ. Measurement and interpretation of photosynthetic light-response curves in algae in the context of photoinhibition and diel changes. J Phycol. 1993;29:729–39. doi:10.1111/j.0022-3646.1993.00729.x. Béchet Q, Feurgard I, Guieysse B, Lopes F. The colorimetric assay of viability for algae (CAVA): a fast and accurate technique. J Appl Phycol. 2015;27:1–9. Hirsch LR, Stafford RJ, Bankson JA, Sershen SR, Rivera B, Price RE, et al. 
Nanoshell-mediated near-infrared thermal therapy of tumors under magnetic resonance guidance. Proc Natl Acad Sci. 2003;100:13549–54. Béchet Q, Shilton A, Fringer OB, Muñoz R, Guieysse B. Mechanistic modeling of broth temperature in outdoor photobioreactors. Environ Sci Technol. 2010;44:2197–203.

QB, HB, ML, and OB conceived and designed the study. NA, QB, HB, and ML performed the experiments. All authors analyzed the data. OB provided financial support. QB wrote the manuscript. ML, HB, and OB revised the manuscript. All authors read and approved the final version of the manuscript.

The authors are grateful to Francis Mairet (Inria BIOCORE) for his help during model development.

The datasets generated and/or analyzed during the current study are not publicly available but are available from the corresponding author on reasonable request.

This work was financially supported by the ANR Purple Sun project (ANR-13-BIME-004) and the Inria Project Lab Algae in silico.

Université Côte d'Azur, Inria, BIOCORE, BP 93, 06902, Sophia Antipolis Cedex, France: Quentin Béchet, Martin Laviale, Nicolas Arsapin, Hubert Bonnefond & Olivier Bernard. Sorbonne Universités, UPMC Université Paris 06, CNRS, UMR 7093, LOV, Observatoire océanologique, 06230, Villefranche/Mer, France.

Correspondence to Quentin Béchet.

Additional file 1: S1. Cytometry analysis. S2. An example of Pulse Amplitude Modulation (PAM) fluorometry analysis. S3. Uncertainty analysis via Monte-Carlo simulations. S4. Comparison of Weibull and first-order fits to experimental data. S5. Viability results 6 hours after heat exposure. S6. Evolution of the photosynthetic activity during kinetic studies.

Béchet, Q., Laviale, M., Arsapin, N. et al. Modeling the impact of high temperatures on microalgal viability and photosynthetic activity. Biotechnol Biofuels 10, 136 (2017). https://doi.org/10.1186/s13068-017-0823-z

Keywords: Dunaliella salina, Thermal dose, Outdoor cultivation
DPF 2011
8 Aug 2011, 18:00 → 13 Aug 2011, 16:00 US/Eastern
One Sabin Street, Providence, RI 02903
David Cutts (Department of Physics-Brown University), Meenakshi Narain (Brown University), Ulrich Heintz (Department of Physics-Brown University)
2011 Meeting of the Division of Particles and Fields of the American Physical Society (see http://www.hep.brown.edu/~DPF2011/)

Monday, 8 August
18:00 → 19:00 Registration, Lobby (Biltmore Hotel)

Tuesday, 9 August
Breakfast 30m
Plenary Session, Plenary room (Ballroom B+C)
Convener: Ulrich Heintz (Department of Physics-Brown University)
Welcome 5m Speaker: Prof. James Valles (Chair Physics Department - Brown University)
Welcome 10m Speaker: Prof. Clyde Briant (Vice President for Research, Brown University)
Electroweak Physics 30m Speaker: Michael Henry Schmitt (Department of Physics and Astronomy-Northwestern University)
Beyond the Standard Model: theory 30m Speaker: Konstantin Matchev (Department of Physics-University of Florida)
Beyond the Standard Model: experiment 30m Speaker: George Redlinger (Brookhaven National Laboratory (BNL))
Coffee 30m
Convener: Kate Scholberg (Duke University)
Neutrino Physics: theory 30m Speaker: Prof. Lisa Everett (University of Wisconsin)
Neutrino Physics: experiment 30m Speaker: Jennifer Raaf (FNAL)
Searches for BSM Physics at Low Energy 30m Speaker: Krishna Kumar (Physics Department, University of Massachusetts Amherst)

Forum on Physics and Modern Media, Ballroom A
Panel Discussion 1h 10m Chairs: Ken Bloom (Nebraska), Gordon Watts (UW Seattle) Speakers: Adrian Cho, Lisa Van Pay, Michael Henry Schmitt (Department of Physics and Astronomy)
Lunch 1h 30m

The posters will be displayed until Friday 14:00. Conveners: Greg Landsberg (Brown University), Meenakshi Narain (Brown University), Ulrich Heintz (Department of Physics-Brown University)

Are Leptons and Quarks Highly Relativistic Bound States? 1m
Because the existence of families of elements and hadrons was ultimately understood by the realization that atoms and hadrons are composite, in the 1970's many physicists thought that the existence of the four families of leptons and quarks could be understood if leptons and quarks were composite. By the early 1980's, however, the physics community had given up on the idea because it had not been possible to determine the force that binds constituents into leptons and quarks. The development of supercomputers now makes it feasible to study such highly relativistic bound states. Here the possibility is discussed that leptons and quarks are highly relativistic bound states of a scalar and a spin-1/2 fermion bound by minimal electrodynamics. These bound states are described by the Bethe-Salpeter equation and have the following three properties, all of which are essential if quarks and leptons are composite: (1) The boundary conditions allow strongly bound solutions when the coupling constant has a magnitude on the order of the fine structure constant. Typically the coupling constant for strongly bound solutions is on the order of or greater than unity. (2) All strongly bound, normalizable solutions must have spin-1/2 if the coupling constant has a magnitude on the order of the fine structure constant. It is remarkable that higher spin, strongly bound solutions are forbidden.
(3) Some strongly bound solutions possess a property that suppresses the unobserved decay of a muon into an electron and a photon. Speaker: G. Bruce Mainland (The Ohio State University at Newark) Interparticle forces in scalar QFTs with non-linear mediating fields 1m We study the interparticle potentials for few-particle systems in a scalar theory with a non-linear mediating field of the Higgs type. We use the variational method, in a reformulated Hamiltonian formalism of QFT, to derive relativistic three and four particle wave equations for stationary states of these systems. We show that the cubic and quartic non-linear terms modify the attractive Yukawa potentials, which are dominant at small interparticle distances, by new terms that produce confining and quasi-confining interparticle potentials. Speaker: Mr Alexander Chigodaev (York University) Complex Path Integral as a Fractional Fourier Transform 1m Lately, it is becoming increasingly clear that extending the Feynman Path Integral into the Complex domain yields desirable properties. A first hint in support of such construction can be seen from the connection between SUSY Quantum Mechanics and the Langevin dynamics: analytically continuing the Langevin leads into different SUSY Quantum Mechanical systems (which share the same algebra of observables). Secondly, new results by E. Witten have brought forward new results in 3-dimensional Chern-Simons theory (in the form of Complex geometries) and Super Yang-Mills theory in four dimensions, as well as the relation between Khovanov homology and systems of branes. Given a system of D0-branes, it is possible to understand it in terms of a Fourier Transform. As such, we can extend this system into the Complex plane, reinterpreting the Path Integral as a Fourier Transform over a certain integration cycle, which results in a Fractional Fourier Transform. This can be further understood in terms of the Phase Space of this system, where the Fractional Fourier Transform is related to the Wigner function. In this way, we realize that the label of our Fractional Fourier Transform, which is the Path Integral quantizing the system, acts as the parameter determining the vacuum state. Therefore, the allowed values of this label, for which the Path Integral converges, determine the quantum phases of the system. This can be immediately extended to Matrix models and Lie algebra-valued ones (known as Group Field Theory): the same results hold, so long as certain properties of the Action are satisfied, guaranteeing the convergence of the Path Integral. These results can be dimensionally extended to systems of Dp-branes, showing some relations with the Geometric Langlands Duality and Mirror Symmetry. Furthermore, they can also be understood in terms of coherent state quantization, which opens a window into quantum tomography, and quantum chaos. Speaker: Dr Daniel Ferrante (Brown University) NS5 Branes on the Resolved Cone over $Y^{p,q}$ 1m The AdS/CFT correspondence provides a powerful tool to attack very important questions of strong coupling dynamics using gravitational duals. The Klebanov-Stassler prototype has a large family of duals that contain ${\cal N}=1$ SYM. A new and distinct family of supergravity solutions containing a sector dual to ${\cal N}=1$ SYM might be related to the resolved cone over Einstein-Sasaki spaces. 
In this work we extend the construction of five brane solutions on the resolved cone over $Y^{p,q}$ spaces by expanding the generalizations of the complex deformations in the context of the warped resolved deformed conifold. This work augments recent work which established the existence of supersymmetric five brane solutions wrapped on two-cycles of the resolved cone over $Y^{p,q}$ in the probe limit. We present an ansatz and the corresponding equations of motion. Here we attempt to solve the field equations and give explicit solutions with the expected properties for theories related to strongly coupled Yang-Mills theories. Speaker: Catherine Whiting (University of Iowa)

The Value of the Newton Gravitation Constant Derived From a Combined Sakharov and Kaluza-Klein Model of Baryo-Genesis and Gravity EM Unification 1m
A model of Sakharov (1967) baryo-genesis where the 5th, compactified, dimension of Kaluza-Klein (1926) theory does 'double duty' as the creator of lepton-baryon numbers and creator of separate EM and gravity equations is proposed. The appearance of the compactified dimension breaks the symmetry of the Planckian vacuum, allowing the vacuum quantities hbar, G, and c to generate the particle quantities e, mp and me, which are the charges and masses of the proton and electron respectively. Under this model the lepton-baryon asymmetry is a reflection of the time-space dimensional asymmetry, and the relationship of the hidden dimension size (in esu units) ro = e^2/(mo c^2), where c is the speed of light and mo = (mp me)^1/2, to the Planck length rp = (G hbar/c^3)^1/2 is ln(ro/rp) = (mp/me)^1/2, which mirrors the lepton-baryon separation. Inversion of this formula leads to a highly accurate formula for G, the Newton gravitation constant. Improvement of this model by applying corrections near the Planck scale results in a formula for G of further improved accuracy (in esu): G = alpha (e^2/mo^2) exp(-2((mp/me)^1/2 - .86/(mp/me) ...)) = 6.6728 x 10^-8 dyn-cm^2-g^-2, where alpha is the fine structure constant. In the GEM theory of long range field unification (Brandenburg 1988, 1995, 2010) gravity fields are equivalent to an array of ExB drift cells or Poynting vectors, and EM and gravity fields separate with the appearance of the Kaluza-Klein 5th dimension. The predicted hidden dimension size is 3000.0 MeV and lies right between the eta-c and J/psi particles and almost exactly on the Sigma(3000) baryon. Assuming a model where the proton-electron (lepton-baryon) field unification occurs in a U(1) symmetry with imaginary rotation angle determined by the normalized charge q/e and a multiplier ln(s'), where s' = (mp/me)^1/2 (the square root of the mass ratio), we obtain, with qP = chbar (the Planck charge), the approximate relation MP/mp = exp(ln(s') qP/e), where MP is the Planck mass. When combined with the previous relations, we obtain, to leading order, "The Transcendental Cosmos Equation" relating the value of alpha to s': s' = ln(s')(1/alpha^1/2 + 1) - ln(1/alpha) ~ 42.85... This is similar to the "MIT Bag Model" (Chodos et al., 1974) result s' ~ (4pi/alpha)^1/2. Humorously, this recalls the number "42" which appeared in The Hitchhiker's Guide to the Galaxy as the "answer to life, the universe, and everything"; however, this author makes no such claims. Brandenburg J.E. (1988) APS Bull., 33, 1, p. 32. Brandenburg, J. E. (1995), Astrophysics and Space Science, 227, p. 133. Brandenburg J.E. (2010) OSAPS Meeting. Chodos, R. L. Jaffe, K. Johnson, and C. B. Thorn (1974) Phys. Rev. D 10, 2599.
Klein, Oskar, Zeitschrift für Physik 37, 895 (1926). Sakharov, A.D., JETP 5, 24-27 (1967). Speaker: Dr. John Brandenburg (Orbital Technologies Corporation)

Theory of EW interactions with dynamically generated scalars, gauge fixings, and masses of Z and W bosons 1m
A new theory of the EW interactions without spontaneous symmetry breaking, a Higgs boson, or the Faddeev-Popov procedure is presented in this talk. It consists of three parts: $SU(2)_L\times U(1)$ gauge fields, massive fermion fields, and their interactions. A new mechanism of $SU(2)_L\times U(1)$ symmetry breaking caused by the fermion masses is found. Nonperturbative solutions are found. The vacuum polarization of the Z field is expressed as \[\Pi_{\mu\nu}(q^2)=\{F_1(q^2)(q_\mu q_\nu-q^2 g_{\mu\nu})+F_2(q^2)q_\mu q_\nu+{1\over2}\Delta m^2_Z g_{\mu\nu}\}.\] Therefore, both the gauge fixing term ($F_2$) and the mass term of the field are dynamically generated from the fermion masses. The top quark mass plays a dominant role. A nonzero $\partial_\mu Z^\mu$ leads to a scalar field and a gauge fixing term for the Z field. The mass of the scalar field is determined to be \[m_{\phi^0}=m_t e^{\frac{m^2_Z}{m^2_t}\frac{16\pi^2}{3\bar{g}^2}+1}=3.78\times10^{14}GeV.\] The gauge fixing is determined to be \[\xi_Z=-1.18\times10^{-25}.\] After renormalization the mass of the Z boson is determined to be \[m^2_Z={1\over2}\bar{g}^2 m^2_t,\] which agrees well with the data. Similarly, the vacuum polarization of the W boson is found. A charged scalar field is dynamically generated, \[m_{\phi^{\pm}}=m_t e^{\frac{m^2_W}{m^2_t}\frac{16\pi^2}{3g^2}}=9.31\times10^{13}GeV,\] \[\xi_W=-3.73\times10^{-25}.\] After renormalization the mass of the W boson is determined as \[m^2_W={1\over2}g^2 m^2_t.\] It agrees well with the data. One also obtains \[\frac{m^2_W}{m^2_Z}=\frac{g^2}{\bar{g}^2}=cos^2\theta_W,\] \[G_F=\frac{1}{2\sqrt{2}m^2_t},\] the Fermi coupling constant, in good agreement with the data. The propagators of the Z and W fields are derived as \[\Delta_{\mu\nu}^Z= \frac{1}{q^2-m^2_Z}\{-g_{\mu\nu}+(1+\frac{1}{2\xi_Z})\frac{q_\mu q_\nu}{ q^2-m^2_{\phi^0}}\},\] \[\Delta^W_{\mu\nu}= \frac{1}{q^2-m^2_W}\{-g_{\mu\nu}+(1+\frac{1}{2\xi_W})\frac{q_\mu q_\nu}{ q^2-m^2_{\phi_W}}\}.\] This theory can be tested by LHC experiments. Speaker: Bing An Li (University of Kentucky)

On the Smallness of the Dark Energy Density in Split SUSY Models Inspired by Degenerate Vacua 1m
It is well known that in no-scale supergravity global symmetries protect local supersymmetry (SUSY) and a zero value for the cosmological constant. The breakdown of these symmetries that ensures the vanishing of the vacuum energy density near the physical vacuum leads to the natural realization of the multiple point principle (MPP) assumption, i.e. results in a set of degenerate vacua with broken and unbroken local supersymmetry. We present the minimal SUGRA model where the MPP assumption is realised naturally at tree level. In this model vacua with broken and unbroken local supersymmetry in the hidden sector (first and second phases) have the same energy density without any extra fine-tuning. Although the hidden sector does not give rise to the breakdown of supersymmetry in the second phase, SUSY may be broken there dynamically in the observable sector. Then a positive value of the energy density in the second vacuum is induced, which can be assigned, by virtue of MPP, to all other phases including the one in which we live. The total vacuum energy density is naturally tiny or zero in this case.
If the gauge couplings in the physical and second vacua are the same, then the dark energy density depends on the SUSY breaking scale in the physical vacuum only. Assuming a Split SUSY type spectrum, we argue that the observed value of the cosmological constant can be reproduced if the masses of squarks and sleptons are of order 10^{10} GeV. Speaker: Dr Roman Nevzorov (University of Hawaii)

Sticky Dark Matter 1m
There is experimental evidence that Dark Matter (DM) makes up about 25% of the Universe's mass and is most likely nonrelativistic. We explore the possibility of the creation and existence of bound states of Dark Matter and standard model (SM) particles. Such bound states can potentially be created and detected during direct DM search experiments (DAMA, CDMS, XENON, etc.). We work in a model-independent effective field theoretic approach to determine the conditions under which such bound states can be created. Our results appear to depend on the nuclei used in DM direct detection experiments. In this scenario we determine the region of DM parameter space that provides a simultaneous fit to the DAMA and CDMS data. Speaker: Prof. Alexey Petrov (Wayne State University)

Possible Existence of Overlapping Universe and Antiuniverse 1m
The creation of antihydrogen at CERN (1995) and Fermilab (1997) and the very recent synthesis of antihelium at CERN (2011) have invigorated the fascinating idea of the existence of a separate universe and antiuniverse as a result of the big bang. In particular, the production of exotic atoms composed of particles and antiparticles (e.g. positronium, Ps, protonium, Pn, true muonium, Mu, and pionium, A2π), as well as the experimental formation of positronium molecules, Ps2, and theoretical predictions of exotic four-body systems composed of matter and antimatter (e.g. heterohydrogens, PsPn, PsMu, PsA2π), open the door to new research activities aiming to investigate the possible existence of an overlapping universe and antiuniverse. The main goal of the present work is to discuss this possibility and show that it provides satisfactory explanations of the dilemma connected with the rarity of antimatter in our universe and of the mysterious astrophysical observations of highly energetic gamma-rays occurring at the edge of our universe. Speaker: Prof. Mohamed Assad Abdel-Raouf (United Arab Emirates University, College of Science)

Strictly Calculate the Electron Mixing-loop Chain Propagator in SM 1m
Employing the electroweak standard model (SM), we analyze and discuss in detail the structure of the electron mixing-loop chain propagator and its renormalization, and then carry out the analytic calculation. Based on this, we obtain the analytic solution of the electron mixing-loop chain propagator, which is composed of a series of different physical loops. This study offers a reference for the discussion and application of such complex propagators in both theoretical analysis and applications. Speaker: Dr Haiyan Tang (Department of Mathematics and Physics, Chongqing University of Science and Technology, Chongqing, China)

Missing Transverse Energy Significance 1m
The missing transverse energy (MET) plays a fundamental role in the search for physics beyond the Standard Model at the LHC. We present an event-by-event assessment, the MET significance, of whether the observed MET is consistent with arising solely from detector-related limitations, such as measurement resolution and detection or reconstruction efficiency.
We will introduce the formal definition of the significance, discuss our implementation, and show the results of performance studies of the particle flow MET significance in di-jet and W → e + ν data samples collected with the CMS detector. Speaker: Dr Dayong Wang (Department of Physics-University of Florida) On a Singular Solution in Higgs Field (2) - A Representation of Certain f0 Mesons' Masses. 1m In preceding paper 1) the mass and the basic structure of SM Higgs boson (H0) were discussed by obtaining asymptotic solution for the Euler-Lagrange equation of nonlinear Klein-Gordon type, in Higgs field with newly developed mass triangle method. In this paper we at first see that the ground state mass of glueball (GB) is calculated at 502.55 MeV/c2 which is expected as f0(600) meson's mass. The GBs will attract mutually with neighbors among original their components of gluons in different colors, so that they could gradually form cluster. And we show that our computed masses of f0(1370), f0(1500) and f0(1710) are within each f0 meson's mass from experiment while they will construct respective fullerene structure for ur-H0 as well as f0(600), provided that the mass of ur-H0 (120.611 GeV/c2) will consist of a number of masses of GB or f0 in which all (pure) GB fullerene may have an icosahedral (Ih) rotational symmetry. Finally we propose a representation by which f0 mesons masses above are reproduced respectively with masses of several light pseudoscalar mesons such as η0, K0, K0_bar, K±, π0, π± and GB, under the consideration of those junction networks. Where the mass of f0(1500) is described only by the mass of GB. And also ur-H0 will transform into H0 under mass invariance through, for instance, γf0 reaction to ηc as its component via radiative decay of J/ψ. Along with these discussions, a massive gluon propagator for virtual top quark-pair decay is calculated by Bethe-Salpeter equation. 1)Kazuyoshi Kitazawa, APS APRIL MEETING 2011, K1.00034. Speaker: Kazuyoshi Kitazawa (Mitsui Chemicals) Semi-leptonic D_s^+ (1968) Decays as a Scalar Meson Probe 1m The unusual multiplet structures associated with the light spin zero mesons have recently attracted a good deal of theoretical attention. Here we present some aspects associated with the possibility of getting new experimental information on this topic from semi-leptonic decays of heavy charged mesons into an isosinglet scalar or pseudoscalar plus leptons. Speaker: Muhammad Shahid (Syracuse University) Sanford Underground Laboratory at Homestake 1m The status of the Sanford Underground Laboratory at Homestake in Lead, South Dakota will be presented. Excavation of new underground facilities at 4850 feet (about 1480 m) has been completed. Outfitting of the excavated space to house and support the Large Underground Xenon (LUX) detector searching for dark matter and the MAJORANA DEMONSTRATOR neutrinoless double-beta decay experiment is underway and is anticipated to be complete by early 2012. The capability to produce very low background copper by electroforming for the MAJORANA DEMONSTRATOR experiment is now operational at the 4850-foot level. Experiments associated with research in underground biology and geosciences are underway or planned at the Sanford Laboratory. Speaker: Dr Jaret Heise (Sanford Underground Laboratory at Homestake) Accelerator Physics 557 Conveners: R. 
Joel England (SLAC), Dr Vladimir Shiltsev (Fermilab)

Tevatron Accelerator Methods and Techniques Applicable for Future Accelerators 30m
The success of Tevatron Run II is based on advances in accelerator physics, as well as on the excellence and advances in engineering, instrumentation and machine operation. We review the main advances in accelerator physics which contributed to the luminosity growth and/or improvement of the Tevatron complex operations, and discuss their applicability to future colliders. Speaker: Mr Valeri Lebedev (FNAL)

High-luminosity operation of RHIC and future upgrades 30m
The Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory has now operated for a decade. Over this time the two physics programs at RHIC, based on heavy ion and polarized proton collisions respectively, have seen a substantial increase in performance and a variety of operating modes. The performance increases are presented together with the dominant limiting effects and the upgrade plans for the next decade. The heavy ion luminosity upgrade is primarily based on stochastic cooling in store and an increase in the longitudinal focusing. A new polarized source is expected to increase both the polarization and the luminosity. For the latter, electron lenses are also implemented to partially compensate the head-on beam-beam effect. In addition, a number of new operating modes are considered. Speaker: Dr Wolfram Fischer (BNL)

Status of the Super-B factory Design 30m
The SuperB international team continues to optimize the design of an electron-positron collider, which will allow the enhanced study of the origins of flavor physics. The project combines the best features of a linear collider (high single-collision luminosity) and a storage-ring collider (high repetition rate), bringing together all accelerator physics aspects to make a very high luminosity of 10^36 cm−2 sec−1. This asymmetric-energy collider with a polarized electron beam will produce hundreds of millions of B-mesons at the Υ(4S) resonance. The present design is based on extremely low emittance beams colliding at a large Piwinski angle to allow very low βy⋆ without the need for ultra short bunches. Use of crab-waist sextupoles will enhance the luminosity, suppressing dangerous resonances and allowing for a higher beam-beam parameter. The project has flexible beam parameters, improved dynamic aperture, and spin-rotators in the Low Energy Ring for longitudinal polarization of the electron beam at the Interaction Point. Optimized for best colliding-beam performance, the facility may also provide high-brightness photon beams for synchrotron radiation applications. Speaker: Dr Walter Wittmer (SLAC)

Beyond the Standard Model, Plenary Ballroom
Convener: Dr Ariel Schwartzman (SLAC)

MiniReview of Beyond the Standard Model Physics 30m
Over the last 30 to 40 years the Standard Model has shown remarkable agreement with a large variety of experimental tests. With the large accumulated datasets at the Tevatron and the LHC, the TeV scale is now being probed in detail. In this talk we will review the state of searches for physics beyond the Standard Model. In particular, we will focus on the recent results from the LHC, which significantly expand the limits on many models, and discuss a few interesting anomalies which have been observed at the Tevatron.
Speaker: Kevin Black (Department of Physics-Boston University) Search for Heavy Vector-like Quarks at ATLAS in pp Collisions at $\sqrt{s}$=7 TeV 20m We perform a search for vector-like quarks more massive than the top quark coupling to lighter generations using data collected with the ATLAS detector. The W and Z bosons are reconstructed in the $W\rightarrow l^{\pm}\nu$ and $Z\rightarrow l^{+}l^{-}$ where $l=e,\mu$. The vector-like quark is reconstructed from the W or Z and the highest $p_{T}$ jet. Speaker: Mr Samuel Meehan (University of Chicago) Search for Fourth Generation Quarks at CMS 20m The Standard Model with three generations of quarks describes remarkably well all particle phenomena observed to date. Although adding a fourth generation of massive fermions is an obvious extension of the model, it became less popular ever since the limit on light neutrino flavors, and the precise measurements on the electroweak parameters, seem to disfavor such a possibility. However, indirect limits can never replace direct search for heavy particles. We present the results of a search for the heavy fourth generation quark using the CMS detector in pp collisions of the Large Hadron Collider. Speaker: Michael Luk (Brown University) Model independent search for new phenomena in ppbar collisions at sqrt(s) = 1.96 TeV 20m We present a model independent search for physics beyond the standard model in lepton final states. We examine data in 120 unique final states from 1.07 fb-1 of at ppbar collisions at sqrt(s) = 1.96 TeV collected with the D0 detector. We conclude that all discrepancies seen can be attributed to modeling issues and do not claim evidence of new physics. Speaker: Dr Peter Renkel (postdoc) Detector Technology and R&D 556 Conveners: James Brau (Univ. of Oregon), Maurice Garcia-Sciveres (LBNL) Upgrade plans of the CMS detector 30m Overview of upgrade plans. Speaker: Daniela Bortoletto (Purdue) ATLAS detector upgrade plans 30m Overview of plans for ATLAS detector upgrades up to 2022. Speaker: Venetios Polychronakos (Department of Physics-Brookhaven National Laboratory (BNL)) A Fast Hardware Tracker for the ATLAS Trigger System 15m In hadron collider experiments, triggering the detector to store interesting events for offline analysis is a challenge due to the high rates and multiplicities of particles produced. The LHC will soon operate at a center-of-mass energy of 14 TeV and at high instantaneous luminosities of the order of 10^34 to 10^35 / cm^2 / second. A multi-level trigger strategy is used in ATLAS, with the first level (L1) implemented in hardware and the second and third levels (L2 and EF) implemented in a large computer farm. Maintaining high trigger efficiency for the physics we are most interested in while at the same time suppressing high rate physics from inclusive QCD processes is a difficult but important problem. It is essential that the trigger system be flexible and robust, with sufficient redundancy and operating margin. Providing high quality track reconstruction over the full ATLAS detector by the start of processing at L2 is an important element to achieve these needs. As the instantaneous luminosity increases, the computational load on the L2 system will significantly increase due to the need for more sophisticated algorithms to suppress backgrounds. The Fast Tracker (FTK) is a proposed upgrade to the ATLAS trigger system. 
It is designed to enable early rejection of background events and thus leave more L2 execution time by moving track reconstruction into a hardware system that takes massively parallel processing to the extreme. The FTK system completes global track reconstruction with near offline resolution shortly after the start of L2 processing by rapidly finding and fitting tracks in the inner detector for events passing L1 using pattern recognition from a large, pre-computed bank of possible hit patterns. We describe the FTK system design and expected performance in the areas of b-tagging, tau-tagging, and lepton isolation which play and important role in the ATLAS physics program. Speaker: Mark Neubauer (University of Illinois at Urbana-Champaign) Electroweak Physics 552 A Conveners: Prof. Doreen Wackeroth (SUNY Buffalo), Prof. Sridhara Dasu (University of Wisconsin) Review of Electroweak Physics at Hadron Colliders 40m Experiments at the Tevatron and the LHC have recently studied the electroweak gauge sector of the Standard Model with impressive breadth and precision. In this talk we discuss several new experimental results, and the theoretical progress they have spurred. Speaker: Frank Petriello (Northwestern University) Measurement of the differential production cross section of Z bosons at 7 TeV 20m In 2010 the CMS experiment collected about 35 pb-1 of data during the first physics run of the LHC accelerator. We present the first measurements of the differential cross section as a function of boson rapidity and transverse momentum for Z bosons produced at 7 TeV and decaying to pairs of electrons and muons. The data are unfolded and corrected for efficiencies, allowing a direct comparison to recent theoretical calculations. Speaker: Joseph Anthony Gartner (University of Florida) Measurement of the Drell-Yan differential cross section at 7 TeV 20m We present a measurement of the Drell-Yan differential cross section in pp collisions as a function of the dilepton invariant mass (dsigma/dm). The data sample was collected by the CMS detector at the LHC operating at a center-of-mass energy of 7 TeV during 2010 and 2011. The results are compared to predictions of the Standard Model. Speaker: Stoyan Emilov Stoynev (Department of Physics and Astronomy-Northwesten University) W and Z production in the forward region with the LHCb experiment 20m Results are presented of W and Z boson production in pp collisions at $\sqrt(s)=7$ TeV by the LHCb experiment. These studies are of particular interest due to LHCb's unique forward acceptance in pseudo-rapidity ($\eta$) of 2 $< \eta <$ 5. The results may either be interpreted as a test of Standard Model predictions, or may be used to constrain better parton density functions in this kinematical regime. Speaker: Jonathan Anderson (Universitaet Zuerich) Heavy Flavor Physics: Quarkonium 554 Studies with onia at LHCb 20m LHCb results will be presented of studies made of the production of $c\bar{c}$ and $b\bar{b}$ states in $pp$ collisions at $\sqrt{s} = 7$~TeV. The range and precision of these measurements will be invaluable in discriminating between theoretical models. Results and prospects will also be shown for so-called exotics, such as the X(3872). Speaker: Luigi Li Gioi (Lab. de Physique Corpusculaire (LPC)-Inst. Nat. Phys. Nucl. 
et) Measurement of Quarkonia production at 7 TeV with the CMS experiment 20m The measurement of J/psi and Υ production cross section in proton-proton collisions at √s = 7 TeV is presented using a data sample collected with the CMS detector at the LHC. We also report the measurement of the ratio of X(3872) and psi(2S) signal yields. Speaker: Yu Zheng (Purdue University) New Measurements of Production and Polarization of Heavy Mesons at CDF 20m We present a new measurement of the Upsilon(1S), (2S), and (3S) polarization in dimuon events from p-pbar collisions at 1.96 TeV, using the CDF detector at the Tevatron. The measurement is conducted exploiting the full 3-dimensional angular distributions over a pT range of 2-40 GeV/c, based on data comprising an integrated luminosity of 6.0 fb-1. We also report the first measurement of production cross-section of low-pt D0 mesons at the Tevatron.. Speaker: Niharika Ranjan (Purdue University) Heavy Ion Physics/Hot and Dense QCD 552 B Conveners: Derek Teaney (Stony Brook University), Prof. Olga Evdokimov (UIC) Deconfinement and chiral transition in QCD at finite temperature 30m I am going to discuss new lattice results on the deconfinement and chiral aspects of the transition in QCD at non-zero temperature. I will report on calculations performed using the Improved Staggered Quark action on Nt=6, 8 and 12 lattices. I will show continuum extrapolation for several quantities that are discussed in connection with the transition at non-zero temperature as well as the determination of the chiral transition temperature in the continuum limit. Finally I will discuss new findings for the equation of state. Speaker: Peter Petreczky (BNL) QCD Critical Point and Event-by-event Fluctuations 25m QCD critical point is a singularity on the QCD phase diagram with distinct signatures which make possible its discovery in heavy-ion collisions. I shall describe the characteristics of the non-monotonous behavior of observables measuring the magnitude and non-Gaussianity of event-by-event fluctuations as a function of the beam energy in the presence of the QCD critical point. I shall discuss implications for the RHIC Beam Energy Scan and what we can learn from recent data. Speaker: Prof. Misha Stephanov (UIC) Identified Hadron Production from the RHIC Beam Energy Scan Program in the STAR experiment 20m The current focus at RHIC is the Beam Energy Scan (BES) program to study the QCD phase diagram --- temperature ($T$) vs. baryon chemical potential ($\mu_B$). The BES program aims to verify some predictions from QCD: that a cross-over occurs at $\mu_B$ = 0, and that there exists a first-order phase transition at large $\mu_B$ and a critical point at an intermediate $\mu_B$. The spectra and ratios of produced particles can be used to extract $T$ and $\mu_B$ in different energies and system sizes. The Solenoidal Tracker At RHIC (STAR) experiment has collected data for Au+Au collisions at $\sqrt{s_{NN}}=$ 7.7 GeV, 11.5 GeV, and 39 GeV in year 2010. One of the advantages during the BES program was the enhanced particle identification with availability of full Time-Of-Flight detector. In addition, STAR collected Cu+Cu collisions at 22.4 GeV in year 2005. We present mid-rapidity spectra ($p_{T}$ or $m_{T}-m_{0}$), rapidity density, average transverse mass, and particle ratios for identified hadrons from the STAR experiment. 
The centrality and transverse momentum dependence of the particle yields and ratios will be compared to existing data at lower and higher beam energies and to various transport models like AMPT and UrQMD. Collision dynamics are studied systematically in the framework of chemical and kinetic freeze-out and their properties extracted from the particle ratios and spectra. Speaker: Dr lokesh kumar (Kent State University) Recent Results of Fluctuation and Correlation Studies from the QCD Critical Point Search at RHIC 20m Enhanced fluctuations and correlations have been observed in the phase transitions of many systems. Their appearance at the predicted QCD phase transition (especially near the expected critical point) may provide insight into the nature of the phase transition. Recent results from the QCD Critical Point Search at RHIC will be presented, with a focus on particle ratio (K/$\pi$, p/$\pi$, and $K/p$) fluctuations and their comparison to previous measurements and theoretical predictions. Speaker: Dr Terence John Tarnowsky (Michigan State University) Higgs Physics 555 Conveners: Dr Laura Reina (Florida State University), Prof. Wade Cameron Fisher (Michigan State University) Search for the Standard Model Higgs boson in H->ZZ decay channels with the ATLAS detector 20m The SM Higgs boson in the medium and high mass ranges has a large branching ratio for decays to a pair of neutral weak bosons. Three decay modes of the Z boson pair have been explored by ATLAS. One Z is typically produced on-shell, which can be tagged using leptonic decay products. The decay of the second Z(*) leads to three independent search channels: llqq, llvv and llll. Background compositions and topologies differ among these channels. The llqq search can use jet information to reduce top and Z+jets backgrounds; the llvv search requires a good understanding of missing transverse energy, while the 4 lepton ('golden') channel is almost background-free and, owing to its low production rate, needs excellent lepton efficiencies in order to be sensitive. This talk summarizes results in all three H->ZZ channels using data collected in 2011. Speaker: G Carrillo Montoya (Department of Physics-University of Wisconsin) Search for the Higgs boson in leptonic ZZ* and semileptonic WW* decays in proton-antiproton collisions at 1.96 TeV 20m We present a search for the Standard Model Higgs boson produced via the H->WW*->lvjj and H->ZZ*->4l processes at a center-of-mass energy of 1.96 TeV using up to 8.5 fb-1 of data collected with the D0 and CDF detectors at the Fermilab Tevatron collider. We search in events with either four charged leptons, or two jets, one charged lepton, and missing transverse energy. The four lepton channel provides a very clean signature, although at the expense of a low cross section time branching ratio. The semi-leptonic H->WW* channel has a relatively larger cross section times branching ratio, but is overcome by the large W+jets background. The procedures used to perform these searches will be discussed. Speaker: Savanna Shaw (MIchigan State University) A Search For The Higgs Boson In H --> ZZ --> 2l 2jet Mode 20m We report on a search for SM Higgs Boson in the mode H --> ZZ--> 2l 2jet conducted by the CMS experiment with the data accumulated during the 2010 & 2011 running of the LHC at sqrt(s) = 7 TeV. 
Speaker: Ashish Kumar (Department of Physics-Physics Faculty-State University of New Y) A Search For The Higgs Boson In H --> ZZ--> 4l Mode 20m We report on a search for SM Higgs Boson in the mode H --> ZZ--> 4l conducted by the CMS experiment with the data accumulated during the 2010 & 2011 running of the LHC at sqrt(s) = 7 TeV. Speaker: Mario Pelliccioni (Universita degli Studi di Torino-Universita e INFN) Low Energy Searches for Physics Beyond the Standard Model 551 B Conveners: Shufang Su (University of Arisona), William Molzon (University of California, Irvine) Search for a Neutron Electric Dipole Moment at the Paul Scherrer Institut 20m At the new ultracold neutron source at the Paul Scherrer Institut(PSI) a collaboration of 15 European institutions is setting up an experiment to search for the nEDM with improved sensitivity. The same apparatus which provided the present best limit on the nEDM (d < 2.9 x 10-26 ecm 90% CL, Baker et al., Phys. Rev. Lett. 97 (2006) 131801), was moved from the Institut Laue Langevin (ILL) in spring 2009 to PSI. Since then it was thoroughly investigated and several components have been upgraded and improved. Most remarkable are: the HV system, the magnetic field control and demagnetization method, the mercury co-magnetometer, and an additional 12 channel array of scalar cesium magnetometers. In December 2010 we could store first UCN in our apparatus at PSI. This spring the co-magnetometer was running continuously during several weeks for a measurement of the mercury geometric phase, one of the most important systematic effects. In general all subsystems are working. We have ongoing studies improving the understanding of systematic effects. First data taking runs are scheduled for autumn 2011. Expected statistics might be sufficient to improve on the previous result. Two hundred nights of data taking in 2012 and 2013 should increase the sensitivity to dn < 5×10-27 ecm in the case of a null result. Simultaneously the collaboration is developing an entire new apparatus to further gain an order of magnitude in sensitivity O(10-28) from 2015 onwards. Speaker: Dr Philipp Schmidt-Wellenburg (Paul Scherrer Institut) A Search for the Electric Dipole Moment of the Neutron 20m The experimental search for a neutron electric dipole moment could reveal new sources of time-reversal (T) and charge-conservation-and-parity (CP) violation and challenge proposed extensions to the Standard Model. The goal of the present experiment is to improve the measurement sensitivity of the neutron EDM by two orders of magnitude. The physics goals of this experiment remain timely and of unquestioned importance. There is ample reason to expect a nonzero value for the neutron EDM: many theories predict EDM values within the six orders-of-magnitude window between the current limit and the value allowed by the Standard Model. The results of this experiment could make a significant complementary contribution to the search for new physics at the Large Hadron Collider (LHC). The experiment is based on the magnetic-resonance technique of rotating a magnetic dipole in a magnetic field. Polarized neutrons and polarized 3He atoms are confined in a bath of superfluid 4He at a temperature of 450 mK. When placed in an external magnetic field, both the neutron and 3He magnetic dipoles precess in the plane perpendicular to the magnetic field. 
The neutron EDM is determined from the difference in the precession frequencies of the neutrons and the 3He atoms when a strong electric field is applied either parallel or anti-parallel to the magnetic field. The 3He serves as a volume comagnetometer to minimize magnetic-field systematic effects. Due to shielding effects, 3He should have a negligible electric dipole moment. Improvements over previous experiments arise from an increased electric field due to the excellent dielectric properties of superfluid 4He, an increase in the total number of ultracold neutrons (UCNs) stored, and an increased measurement time due to the longer storage of UCN in the cryogenic container. I will review the present status of the construction of the nEDM experiment and outline its role within the context of the international efforts to measure electric dipole moments. Speaker: Prof. Paul Huffman (North Carolina State University) EDMs and their implications 20m The Electric Dipole Moments (EDMs) provide an unique way of probing CP violations. The search for neutron and atom EDMs has reached very high precision after many decades of effort. I will discuss the implications of the EDM search on the theory of SM, SUSY, as well as the Baryogenesis scenarios. In particular, based on the study of the interplay of EDMs and electroweak Baryogenesis in MSSM, we learn that the wino-driven scenario has been ruled out by current EDM bounds, and the bino-driven scenario is the only viable scenario. With the next generation of EDM experiments that are projected to push the sensitivity by another 2 to 3 orders of magnitude, the parameter space of the bino-driven scenario can also be fully covered. Speaker: Dr Yingchuan Li (Brookhaven National Lab) A proton EDM experiment: most sensitive to CP-violation beyond the SM 20m High intensity polarized proton beams in storage rings make possible the development of an experiment to probe the proton electric dipole moment (EDM) with sensitivity of 10^-29 e-cm. At this level it will be sensitive to new physics at the 3000 TeV and if new physics exists at the LHC scale, it will be sensitive at the sub-micro-radian level of CP-violating phases. The method utilizes an electric storage ring and polarized protons at their magic momentum (0.7 GeV/c) and takes advantage of several years of experience manipulating polarized beams in storage rings. The experimental concepts were scrutinized in two separate and very successful technical reviews, one in December 2009 and one in March 2011. The collaboration is expecting to submit the proton EDM proposal to DOE by the end of June 2011 for CD0. Speaker: Dr Yannis Semertzidis (BNL) Neutrino Physics: Chaired by Jon Urheim 550 Conveners: , Sam Zeller (FNAL) Solar and Atmospheric Neutrino Physics with Super-Kamiokande 20m We present neutrino oscillation results based on data samples from all four phases of the experiment (SK-1 through SK-4) over a 15 year running period. Atmospheric neutrino data spanning 5 orders of magnitude in neutrino energy and 4 orders of magnitude in baseline are used to constrain the mixing parameters of neutrino oscillation as well as search for non-standard effects. Solar neutrino data is also used to constrain mixing parameters and search for evidence of day night differences and spectral distortion. Speaker: Roger Wendell (Duke University) Low Energy Neutrino Astronomy in Super-Kamiokande 20m Super-Kamiokande is sensitive to neutrino interactions between 4 and 100MeV via elastic scattering and inverse beta decay. 
I will present Super-Kamiokande's ongoing measurements of solar neutrinos and its searches for supernova neutrinos. Speaker: Dr Michael Smy (UCI) MINOS Electron-neutrino Appearance Analysis 15m MINOS is a long-baseline neutrino oscillation experiment which started commissioning in 2005. MINOS has provided many physics opportunities in the past few years. It has made the best measurement of $\Delta m^{2}_{32}$ and made the first measurement of $\Delta \bar{m}^{2}_{32}$. MINOS has also set the most stringent limit on the fraction of active neutrinos transitioning to sterile neutrinos. MINOS has attempted to measure $\theta_{13}$ and has obtained a limit comparable with the current best limit, depending on the CP-violation phase and the neutrino mass hierarchy. With more data and improved analysis techniques, MINOS might also set a better limit on $\theta_{13}$ within this year. Since MINOS will end soon, MINOS+ has been proposed to run the experiment in a high-energy beam configuration. Speaker: Dr Xiaobo Huang (Argonne National Laboratory) Search for Electron Neutrino Appearance in MINOS 20m The MINOS Collaboration continues its search for electron neutrino appearance in the NuMI beam at Fermilab. Neutrinos in the beam interact in the Near Detector, located 1 km from the beam source, allowing us to characterize the backgrounds present in our analysis. In particular, we can estimate the number of electron neutrino candidate events we expect to see in the Far Detector (735 km away, in the Soudan mine in northern Minnesota) in the presence or absence of muon neutrino to electron neutrino oscillation. Recent efforts to improve the sensitivity of the analysis, including upgrades to the event identification algorithm and fitting procedure, are discussed, and the latest results from the search are presented. Speaker: Mr Mhair Orchanian (Caltech) Electron Antineutrino Appearance in MINOS 15m The Main Injector Neutrino Oscillation Search (MINOS) is a long-baseline neutrino experiment that utilizes Fermilab's NuMI beam and two steel-scintillator calorimeters. Designed to search for νµ disappearance, MINOS provides an opportunity to study νe appearance as well. Analysis methods developed by the MINOS νe group have facilitated the placement of limits upon the mixing angle associated with νµ to νe oscillations. In addition, the experiment is capable of repeating its analyses using an antineutrino beam. Recent observations of anti-νµ disappearance have motivated supplementary data collection with the antineutrino beam configuration. The benefits of an anti-νe appearance study and MINOS's anti-νe sensitivity will be presented. Speaker: Mr Adam Schreckenberger (University of Minnesota) Perturbative and non-Perturbative QCD 551 A Conveners: Christina Mesropian (The Rockefeller University), Sean Fleming (University of Arizona) Mini-Review: Unravelling Jets at Colliders 30m Jets play an important role in a broad range of collider studies and there have been many recent developments in understanding them. Unravelling their structure enhances our ability to interpret data, search for new physics and develop our understanding of Monte Carlos. This talk reviews some theoretical developments in jet physics and their connection to recent experimental results. Speaker: Dr Saba Zuberi (UC Berkeley) Jet substructure and event shapes at high Q^2 in ATLAS 20m We present results on the measurement of hadronic jet event shapes and jet substructure in proton-proton collisions at sqrt(s) = 7 TeV with the ATLAS detector.
These measurements constitute the first dedicated study of hadronic event shapes at high Q^2 in ATLAS. New results are also presented on the measurement of the substructure of these jets and in commissioning the tools for distinguishing the signatures of new boosted massive particles in the hadronic final state. Two ``fat'' jet algorithms are used, along with the filtering jet grooming technique that was pioneered in ATLAS. New jet substructure observables are compared for the first time to data at the LHC. Finally, a sample of candidate boosted top quark events collected in the 2010 data is analyzed in detail for the jet substructure properties of hadronic ``top-jets'' in the final state. Together, these measurements demonstrate not only our excellent understanding of QCD in a new energy regime but open the path to using complex event-level and jet substructure observables in the search for new physics. Speaker: David Miller (SLAC National Accelerator Laboratory) A New Formulation of Analytic, Non-Perturbative, Gauge-and Lorentz-Invariant QCD 20m A simple and previously overlooked choice of one parameter allows the Schwinger/Symanzik Generating Functional of QCD to be re-written in a manifestly gauge-invariant fashion, without the need of Fadeev-Popov insertions. When combined with Fradkin functional representations for the Green's function, G[A], of a quark in an effective color potential A, and the vacuum loop functional L[A], all QCD correlation functions can be represented as Gaussian, functional-linkage operations connecting relevant combinations of G[A] and L[A]. And because the Fradkin representations for those functionals are Gaussian in their dependence on A, the functional-linkage operation can be done exactly, and one then sees that gauge invariance here is achieved by gauge-independence, as the gauge-dependent gluon propagators exactly cancel out everywhere. In this way, the non-perturbative sums over Feynman graphs reduce to an explicit, gauge-independent functional expression. That new, final functional expression now displays a new property we call "Effective Locality" (EL), in which the infinite sum over infinite classes of Feynman graphs corresponds to the exchange of a well-defined "gluon bundle" from specific space-time points on interacting quarks and/or antiquarks. And one then sees that it is no longer possible to continue to treat quarks as ordinary particles, with well defined asymptotic momenta or positions, for they are bound objects whose transverse coordinates can never be measured exactly. Once this necessity of introducing realistic "transverse imprecision" is realized, and introduced into the fundamental Lagrangian, all functional sums become well-defined, and one has an analytic way of obtaining physical information. It should be noted that such progress is possible because the Fradkin representations are Potential Theory constructs, with reasonable approximations in different physical situations; e.g., at high energies, G[A] simplifies to a Bloch-Nordsieck/eikonal form. With those simplifications, and the remarkable property of EL, we have been able to calculate eikonal amplitudes for quark-antiquark scattering, and for three-quark scattering, and to extract from these eikonal functions the form of binding potentials that produce hadrons. 
And, most interesting, for the first time we can exhibit a mechanism which leads to effective Yukawa scattering between nucleons, including a scattering potential which becomes negative, as needed to make a deuteron from a proton and a neutron. This, to our knowledge, and for the first time ever, is Nuclear Physics from basic, realistic QCD. This work, by myself (HMF), French colleagues Grandou and Gabellini (of the Universite de Nice), and my ex-Brown student Ming Sheu, is barely 18 months old at this writing; and there are many problems remaining to be studied, such as non-perturbative renormalization theory, the quark-gluon plasma, and - indeed - all of Nuclear Physics. On the basis of what we have been able to derive up to this point, we believe that this approach to analytic QCD calculations will, in the future, become extremely useful. It should be noted that the above remarks are a description of "textbook" QCD, with one type of quark and the massless gluons of SU(3); flavors and electroweak effects, as well as spin and angular momentum dependences, are to be added in later on. Speaker: Herbert Fried (Brown University) The Rapidity Renormalization Group 20m We introduce a systematic approach for the resummation of perturbative series which involve large logarithms not only due to large invariant mass ratios but large rapidities as well. Series of this form can appear in a variety of gauge theory observables. The formalism is utilized to calculate the jet broadening event shape and transverse momentum ($p_T$) distributions in a systematic fashion to next-to-leading logarithmic order. An operator definition of the factorized cross section as well as a closed form of the next-to-leading-log cross section are presented. Speaker: Dr Jui-Yu Chiu (Carnegie Mellon University) Top Quark Physics: Chaired by Zack Sullivan 553 Conveners: Reinhard Schwienhorst (Michigan State University), Dr Zack Sullivan (Argonne National Laboratory) Mini-review of the top quark physics 30m I will present a theoretical overview of top quark physics. I will discuss the status of the theoretical description of various processes at hadron colliders that are used to extract information about the dynamics of top quarks and their quantum numbers, such as mass, spin and charge. Speaker: Kirill Melnikov (Johns Hopkins University) Measurements of the top production cross section and properties with the D0 detector 20m We present measurements of the inclusive top quark pair production cross section in ppbar collisions at sqrt(s)=1.96 TeV utilizing data corresponding to an integrated luminosity of 5.3 fb-1 collected with the D0 detector at the Fermilab Tevatron collider. We use both final states with one charged lepton, at least two jets and missing transverse energy, and final states with two charged leptons, at least one jet and missing transverse energy. We exploit both the kinematic features of the final states and the identification of jets originating from b-quarks to separate the ttbar production signal from backgrounds and obtain measurements of the cross sections which agree with the predictions of the standard model. We then investigate the ttbar final state and obtain a measurement of the top quark branching fractions into b-quarks. Finally we use the sample of ttbar events to investigate the color representation of the hadronically decaying W boson in the ttbar events, using a new calorimeter-based vectorial variable, the "jet pull", sensitive to the color-flow structure of the final state.
Speaker: Liang Li (University of California Riverside) Top quark physics results using CMS data at 7 TeV 20m We give an overview of the most recent results on top quark properties and interactions, obtained using data collected with the CMS experiment during the years 2010-2011 at 7 TeV center-of-mass energy. Measurements are presented both for the inclusive top pair production cross section, using the dilepton, lepton+jets, hadronic and tau channels, and for various differential cross sections. The results are compared with standard model predictions and allow a search for the possible presence of new physics. In particular, measurements of the top pair invariant mass distribution are used to search for new particles decaying to top pairs. We extract the mass of the top quark using various methods, including indirect constraints from the measured cross section. We measure total and differential cross sections for the electroweak production of single top quarks in both t- and tW-channels, also useful for constraining the CKM matrix element Vtb. Further results include measurements of the W helicity in top decays and the top pair charge asymmetry. Speaker: Prof. Karl Ecklund (Rice University) Measurement of the top-pair production cross-section at ATLAS 20m We present measurements of the top-quark pair-production cross section in proton-proton collisions at sqrt(s) = 7 TeV with the ATLAS detector at the Large Hadron Collider. The cross sections are measured in the lepton+jets channels. Speaker: Dr M Saleem (University of Oklahoma, USA) Coffee Break 30m Experimental program at Accelerator Test Facility 30m A few representative experimental results from the 20-year history of this dedicated advanced accelerator R&D user facility will open the presentation. The evolution of the facility, its current capabilities and its experimental program will be discussed in detail. Monoenergetic ion beam generation in laser-plasma interactions and the observation of coherent synchrotron radiation suppression with shielding plates in the bending magnet will be used to illustrate recent results at the ATF. Experimental plans and future upgrades will also be discussed. Speaker: Dr Vitaly Yakimenko (BNL) Do optical-scale structures make suitable accelerators for colliders? 30m Of the various advanced accelerator schemes that promise high accelerating gradients, optical-scale structures offer a distinct set of performance parameters along with their own challenges. In addition to the promise of an order of magnitude improvement in accelerating gradients (to ~GV/m) over conventional structures, these devices produce low charge, femto- to atto-second bunches at very high repetition rates (MHz-GHz). The implications for colliders are significant: beam disruption and background beamstrahlung might be significantly reduced, but the bunch format may require changes in detectors and trigger systems. Some variants of the optical-scale structures can support flat (high-aspect ratio) beams, which may also be advantageous in a collider. In order to realize such a collider, these devices must demonstrate very high wall-plug efficiency, high reliability and long lifetimes. In this talk, I will attempt to answer the question posed in the title. I will review the present state-of-the-art in optical-scale structures and speculate on the mid- and long-term challenges to be overcome in order to prove their applicability to high-energy physics.
Speaker: Dr Gil Travish (UCLA) New Methods of Particle Collimation in Colliders 30m The hollow electron beam collimator is a novel concept of controlled halo removal for intense high-energy beams in storage rings and colliders. It is based on the interaction of the circulating beam with a 5-keV, magnetically confined, pulsed hollow electron beam in a 2-m-long section of the ring. The electrons enclose the circulating beam, kicking halo particles transversely and leaving the beam core unperturbed. By acting as a tunable diffusion enhancer and not as a hard aperture limitation, the hollow electron beam collimator extends conventional collimation systems beyond the intensity limits imposed by tolerable losses. The concept was tested experimentally at the Fermilab Tevatron proton-antiproton collider. Results on the collimation of 980-GeV antiprotons are presented. Speaker: Giulio Stancari (Fermi National Accelerator Laboratory) Novel Beam Diagnostics for Future HEP facilities 30m To meet the energy and luminosity requirements of future HEP machines, advances are required in accelerator instrumentation and technology in many diverse areas, such as: acceleration, component alignment and stability, fast timing instrumentation and optics, photocathodes, pulsed power components, photon detectors, halo monitors, collimators, lasers, insertion devices, noninvasive profile monitors, high resolution position monitors, trapped ion diagnostics, and feedback systems. We will discuss the prospects for these advanced diagnostic techniques with an emphasis on high impact technologies that can significantly enhance the performance and scientific output of future machines. Speaker: Dr John Byrd (LBNL) Searches for Supersymmetry in Hadronic Final States with the CMS Detector at the LHC 20m We present the results of searches for Supersymmetry in all-hadronic final states with jets and missing transverse energy, including the cases of jets identified as b-jets, the decay products of top quarks and hadronically decaying tau leptons. The searches are performed using data collected by the CMS experiment at the LHC in pp-collisions at a center-of-mass energy of 7 TeV. Various data-driven techniques used to measure the Standard Model backgrounds are discussed. The results are interpreted in a range of Supersymmetric scenarios. Speaker: Sudarshan Paramesvaran (University of California Riverside) Search for supersymmetry in final state containing isolated electrons and muons, jets, and missing transverse momentum from sqrt(s)=7 TeV proton-proton collisions at the LHC 20m We report on searches for supersymmetry in events with one, two or multi-lepton final states with the 2011 data from the ATLAS experiment. In case of no excess observed a 95% CL upper limit is set for squark and gluino masses for different signal models. A 95% CL limit on the cross section times branching ratios times efficiency is set for different final states under study. Speaker: Tapas Sarangi (University of Wisconsin) Searches for Supersymmetry in Final States with Leptons with the CMS detector at the LHC 20m We present the results of searches for Supersymmetry in various topologies that lead to one or more isolated leptons, jets, and missing transverse energy in the final state. The searches are performed using data collected by the CMS experiment at the LHC in pp-collisions at a center-of-mass energy of 7 TeV. Various data-driven techniques used to measure the Standard Model backgrounds are discussed. 
The results are interpreted in a range of Supersymmetric scenarios. Speaker: Benjamin Henry Hooberman (Fermi National Accelerator Lab. (Fermilab)) Search for Physics Beyond the Standard Model in Opposite-Sign Dilepton Events at CMS 20m The results of searches for Supersymmetry in events with two opposite-sign isolated leptons, hadronic jets, and missing transverse energy in the final state are presented. The searches use pp collisions at 7 TeV collected in 2011 by the CMS experiment. Speaker: Derek Michael Barge (Physics Department-Univ. of California Santa Barbara) Search for New Physics in pp Collisions at √s = 7 TeV in Final States with Missing Transverse Energy and Heavy Flavor 20m Results are presented of a search for new physics in events with large missing transverse energy and heavy flavor jet candidates in √s=7 TeV proton-proton collisions with the ATLAS detector at the Large Hadron Collider. Several signal regions corresponding to different regions of phase space are examined. The results are interpreted in the context of phenomenological simplified new physics models as well as universal new physics models such as mSUGRA. Speaker: Bart Butler (SLAC) Interpretation of SUSY Searches in ATLAS with Simplified Models 20m We present the status of interpretations of Supersymmetry searches in ATLAS using simplified models. Such models allow a systematic scan through the phase space in the sparticle mass plane, and in the corresponding final state kinematics. Models at various levels of simplification have been studied in ATLAS. The results can be extrapolated to more general new physics models which lead to the same event topology with similar mass hierarchies. Speaker: Hideki Okawa (UC Irvine) CP-Violation 551 B Conveners: Enrico Lunghi (Indiana University), Jim Olsen (Princeton University) CP violation - minireview 40m I will review the present status of CP-violating observables in low-energy processes and the progress in their precision that can be expected in the future. I will discuss what deviations from the Standard Model can potentially teach us about new physics and also discuss recent hints of non-Standard Model CP-violating sources in heavy quark transitions. Speaker: Jure Zupan Anomalous like-sign dimuon charge asymmetry at D0 28m We present an improved measurement of the charge asymmetry $A$ of like-sign dimuon events in 9 fb$^{-1}$ of $p\overline{p}$ collisions recorded with the D0 detector at a center-of-mass energy $\sqrt{s} = 1.96$ TeV at the Fermilab Tevatron collider. From $A$, we extract the like-sign dimuon charge asymmetry in semileptonic $b$-hadron decays. We also study the dependence of the charge asymmetry on the muon impact parameter. Additional constraints on the $CP$ violation in the $B$ meson sector are also derived from a measurement of the flavor-specific semileptonic asymmetry in the $B^0_d\to\mu D+X$ channel. Speaker: Bruce Hoeneisen (Universidad San Francisco de Quito) Probing CP violating anomalous top-quark couplings at Hadron Colliders 24m In this talk, I will discuss T-odd correlations induced by CP-violating anomalous top-quark couplings at both the production and decay level in the context of the Tevatron and the Large Hadron Collider. We will also show that by simply making use of the four-momenta of the top decay products it is possible to isolate such effects. With specific examples of top decay modes the experimental sensitivities for the aforementioned couplings will also be discussed.
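(Editorial note: a generic example of the kind of T-odd correlation referred to in the preceding abstract, not necessarily the specific observables used in the talk, is a triple product built from the decay-product momenta, $O_T = \vec{p}_{\ell}\cdot(\vec{p}_{b}\times\vec{p}_{\bar{b}})$, or covariantly $\epsilon_{\mu\nu\rho\sigma}\,p_1^{\mu}p_2^{\nu}p_3^{\rho}p_4^{\sigma}$. Such an observable changes sign under naive time reversal, so a nonzero counting asymmetry $A_T = [N(O_T>0)-N(O_T<0)]/[N(O_T>0)+N(O_T<0)]$ signals a T-odd effect, which in the absence of large final-state interactions points to CP violation.)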
Speaker: Dr Sudhir Gupta (Iowa State University) Detector Technology and R&D: Chaired by Kevin Pitts 556 Performance of the b-tagging algorithms with an upgraded CMS detector 15m Identification of jets originating from b quarks (b-tagging) will likely continue to be a key element of many physics analyses at the upgraded HL-LHC where much higher pileup can significantly reduce the performance. An upgrade of the CMS pixel detector proposed for the Phase 1 HL-LHC should enable CMS to maintain the current level of b-tagging performance even in the presence of very high pileup. Results of Monte Carlo simulation studies with an upgrade CMS pixel detector will be presented for tracking and b-tagging performance and compared to that for the current CMS detector. Speaker: Eric Charles Brownson (Department of Physics and Astronomy-Vanderbilt University) Belle II Detector: Status and Proposed US Contribution 15m Over the course of the last decade, the Belle detector at the KEKB collider has collected over 1 ab^{-1} of integrated luminosity, allowing for a number of precision measurements of the Standard Model, including confirmation of the Kobayashi-Maskawa mechanism of CP violation. In June of 2010, KEKB and Belle were shut down to begin upgrading both the accelerator and detector. The increased luminosity of the new accelerator, Super-KEKB, coupled with significant improvements in background rejection and sensitivity of the upgraded detector, Belle II, will ultimately provide a dataset approximately 50 times larger than that obtained with Belle. The US groups in Belle II have chosen to focus their efforts on areas of the detector that will have high impact on the physics and that match their expertise and experience: high precision particle identification (especially at higher momenta), muon/KL identification and monitoring of the electron-positron beams - during commissioning and operation. In this presentation, we review the plans and status of the SuperKEKB/Belle II upgrade. Additionally, we describe the proposed US contributions to Belle II which take advantage of –and leverage– US expertise in detector and electronics design, accelerator instrumentation, and existing US facilities. Speaker: James Fast (Pacific Northwest National Laboratory) ATLAS pixel detector upgrades 15m The ATLAS experiment is building an "Insertable B-Layer" (IBL) pixel detector to be installed on a replacement beam pipe in 2013. This detector is using the new FE-I4 pixel readout chip recently developed. The IBL will fit inside and not alter the existing ATLAS pixel detector. However, the possibility is being studied to replace the pixel whole detector in 2017 with a lower mass, higher performance instrument based on the FE-I4 chip and new mechanical structures. Speaker: Mauricio Garcia-Sciveres (LBNL) Study of the readout chip and silicon sensor degradation for the CMS pixel upgrade 15m Hybrid silicon pixel detectors are currently used in the innermost tracking system of the Compact Muon Solenoid (CMS) experiment. Radiation tolerance up to fluences expected for a few years of running of Large Hadron Collider (LHC) has already been proved, although some degradation of the part of the silicon detector closer to the interaction point is expected. During the LHC upgrade phases, the level of dose foreseen for the silicon pixel detector will be much higher. 
To address this, dedicated irradiation tests with fluences above $\mathcal{O}(10^{15})$ n$_{eq}$/cm$^2$ have been performed on the silicon sensor and readout chip. Changes in the operation of the sensor and readout chip as a function of the fluence are presented. The charge collection efficiency has been studied: partial recovery of the detector efficiency can be achieved by operating the detectors in a controlled environment and at higher bias voltage. Speaker: Gemma Tinti (Department of Physics and Astronomy-University of Kansas) Chronopixel Detector Development for Vertex Detectors for future e+e- Colliders 15m Studies carried out in the U.S., Europe, and Asia have demonstrated the power of a pixel vertex detector in physics investigations at a future high energy linear collider. At one time, silicon CCDs (Charge-Coupled Devices) seemed like the detector elements of choice for vertex detectors for future Linear e+ e- Colliders. However, with the decision for a cold TESLA-like superconducting technology for the future International Linear Collider (ILC), the usefulness of CCDs for vertex detection has become problematic. The time structure of this cold technology is such that it necessitates an extremely fast readout of the vertex detector elements, and thus CCDs as we know them will not be useful. New CCD architectures are under development but have yet to achieve the required performance. For these reasons the development of Monolithic CMOS pixel detectors, which allow extremely fast non-sequential readout of only those pixels that have hits in them, has become increasingly important. This feature significantly decreases the readout time required. Recognizing the potential of a Monolithic CMOS detector, we initiated an R&D effort to develop such devices. Another important feature of our present conceptual design for these CMOS detectors is the possibility of putting a time stamp on each hit with sufficient precision to assign each hit to a particular bunch crossing. This significantly reduces the effective backgrounds in that in the reconstruction of any particular event of interest we only need to consider those hits in the vertex detectors that come from the same bunch crossing. The current Chronopixel design is for chips up to 12.5 cm x 2.0 cm in size with a single layer of 10 µm x 10 µm charge sensitive pixels. Each pixel has its own electronics under it, but both the sensitive layer and the electronics are made of one piece of silicon (monolithic CMOS) which can be thinned to a total thickness of 50 to 100 µm, with no need for indium bump bonds. The electronics for each pixel will detect hits above an adjustable threshold. For each hit the time of the hit is stored in each pixel, up to a total of four different hit times per pixel, with sufficient precision to assign each hit to a particular beam crossing (thus the name "chronopixels" for this device). Hits will be accumulated for the 2820 beam crossings of a bunch train and the chip is read out during the 200 millisecond gap between bunch trains. There is sufficient intelligence in each pixel so that only pixels with one or more hits are read out, with the x, y coordinates and the time t for each hit. With 10 micron size pixels we do not need analog information to reach a 3 to 4 micron precision, so at present we plan on digital readout, considerably simplifying the readout electronics.
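(Editorial note: the quoted 3 to 4 micron precision from purely digital readout of 10 µm pixels is consistent with the standard binary-readout resolution estimate $\sigma \approx p/\sqrt{12} = 10\,\mu m/\sqrt{12} \approx 2.9\,\mu m$, where $p$ is the pixel pitch; charge sharing and clustering in a real device modify this somewhat.)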
We have developed a design, in collaboration with SARNOFF Research Labs of Princeton, N.J., of the Chronopixel devices that satisfy the requirements of the proposed ILC. The detailed design has been completed and SARNOFF has fabricated the first set of prototype devices. We have designed and built, with the help of SLAC, the electronics to test these prototypes. We have completed the test of the first prototypes. We found that they mostly work as designed but have some design flaws. In consultation with SARNOFF we defined the parameters for the second set of prototypes, correcting the flaws of the first prototype and further improving the design. The detailed design of the second prototype is now in progress at SARNOFF and we expect the fabrication of the second prototype to be completed later this year. The design of these Chronopixel detectors, the results of the tests of the first prototypes, and the design of the second prototype, will be presented. Speaker: Prof. Charles Baltay (Yale University) Studies of VCSEL failures in the optical readout systems of the ATLAS silicon trackers and Liquid Argon calorimeters 15m The readout systems for the ATLAS silicon trackers and liquid argon calorimeters utilize vertical-cavity surface-emitting laser (VCSEL) diodes to communicate between on- and off-detector readout components. A number of these VCSEL devices have ceased to function well before their expected lifetime. We summarize the failure history, present what has been learned so far about the possible causes, and present lifetime projections for the devices that have not failed. Speaker: Mark Cooke (LBNL) Education and Outreach 551 A Conveners: Gregory Snow (University of Nebraska), Michael Barnett (Lawrence Berkeley National Lab) Education and Public Outreach Activities of the Laser Interferometer Gravitational-Wave Observatory 15m Gravitational waves are produced by some of the most energetic and dramatic phenomena in the Universe: black holes, neutron stars and supernovae. As powerful as they are at their sources, gravitational waves are incredibly elusive by the time they reach the Earth. Although gravitational waves were predicted almost a century ago (as a consequence of General Relativity), they still have not been directly detected. In the past few years, the Laser Interferometer Gravitational-wave Observatory (LIGO) and its international partners have been hunting for gravitational waves. The initial LIGO instruments were among the most sensitive scientific instruments on the planet. Their sensitivity will achieve further significant improvement with the current construction of Advanced LIGO. Direct detection of gravitational waves will allow scientists to explore the death throes of stars, the origin of dark energy and the nature of space-time in a way humans never have before. LIGO technology will push the frontiers of science and engineering in many areas, from lasers and materials science to high performance computing. The nascent field of gravitational-wave astronomy and the LIGO project offer many opportunities for effective and inspirational astronomy and physics outreach, and provide a powerful showcase for the attractions and challenges of a career in science and engineering. In this talk we describe the extensive program of public outreach and education activities already undertaken by the LIGO Scientific Collaboration - from traveling exhibits to student field trips, and more.
We will also talk about a number of special events which are being planned for the next few years. Speaker: Dr Marco Cavaglia (University of Mississippi) Getting Science Beyond the Research Community: Examples of Education and Outreach from the IceCube Project 15m The combination of cutting-edge discovery science with the exotic Antarctic environment provides an ideal vehicle to excite and engage a wide audience. Examples of how the International IceCube Collaboration has brought the Universe to the classroom and the general public via the South Pole will be presented. Speaker: Dr James Madsen (University of Wisconsin River Falls) Early education activities at the Sanford Underground Laboratory/DUSEL 15m The Deep Underground Science and Engineering Laboratory (DUSEL) – proposed for the site of the former Homestake Goldmine in Lead, SD – will provide the facility and infrastructure for scientists to study some of the most compelling questions about the history and fate of our universe through its major experiments searching for direct evidence of dark matter and exploring the nature of neutrinos. The Sanford Underground Laboratory at Homestake - operated by the South Dakota Science and Technology Authority - is currently preparing the site and hosting early science and education activities. The Sanford Center for Science Education (SCSE) is in the planning stages as the education component of DUSEL. The mission of the SCSE is to draw upon the science and engineering of DUSEL, its human resources, its unique facility and its setting within the Black Hills to inspire and prepare future generations of scientists, engineers, and science educators. As work proceeds towards design of the building, institution, and the programs and exhibits therein, early work has progressed towards establishing programs that build capacity and partnerships and begin to prototype innovative educational programming and exhibits to meet its educational vision. As the Sanford Lab/DUSEL education team explores innovative ways to convey the excitement of DUSEL physics to audiences of all ages, successes and challenges from the first two years of early educational programming will be highlighted in this talk. Examples include:
For Students:
• The Davis-Bahcall Scholars Program
• Development of a conceptual modern physics course for K-12
For educators:
• Professional development workshops
For the public:
• Deep science lecture series
• Neutrino Days
Cultural connections:
• Finding intersections between American Indian ways of knowing and modern cosmology
Speaker: Margaret Norris (Sanford Underground Laboratory) Education and Public Outreach of the Pierre Auger Observatory 15m The scale and scope of the physics studied at the Auger Observatory offer significant opportunities for original outreach work. Education, outreach and public relations of the Auger collaboration are coordinated in a separate task whose goals are to encourage and support a wide range of education and outreach efforts that link schools and the public with the Auger scientists and the science of cosmic rays, particle physics, and associated technologies. The presentation will focus on the impact of the collaboration in Mendoza Province, Argentina, such as: the Auger Visitor Center in Malargüe, which has hosted over 60,000 visitors since 2001, and a third collaboration-sponsored science fair held on the Observatory campus in November 2010.
The Rural Schools Program, which is run by Observatory staff and which brings cosmic-ray science and infrastructure improvements to remote schools, will be highlighted. Numerous online resources, video documentaries, and animations of extensive air showers have been created for wide public release. Increasingly, collaborators draw on these resources to develop Auger related displays and outreach events at their institutions and in public settings to disseminate the science and successes of the Observatory worldwide. Speaker: Prof. Gregory Snow (University of Nebraska) HiSPARC: On the interface between outreach and scientific research 15m The HiSPARC project is a high school cosmic ray project that originated in the Netherlands. The aim of this project is to have high school students participate in building and running a scientific project, thereby increasing their enthusiasm for science in general. We are experimenting with different detector configurations, performing a calibration on the KASCADE site, and exploring the physics of a distributed setup. At the same time, we are moving cosmic ray physics in the standard curriculum of the Dutch high schools. Furthermore, there is an international expansion. Recently, a HiSPARC cluster was realized in Aarhus, Denmark, and work is underway on clusters in the United Kingdom. Speaker: Dr Charles Timmermans (Radboud University, Nijmegen, Netherlands) MARIACHI: Science by Scientists, Teachers and Students 15m The MARIACHI initiative involves a community with diverse academic backgrounds to explore forefront science. We focus on the study of cosmic rays. Our flagship theme has been the development of a new technology for the detection of cosmic rays, namely forward scattering radar. Over the years many other research subjects have been added to the list of our interests and they are now in various stages of development. We believe that by creating an environment where teachers and students can work together in the pursuit of science, each can learn about science first hand. In this presentation we will give an overview of the experiment and lessons learned. In particular we will discuss how large numbers of students get involved in the data analysis from the experiment. Impressions about the educational impacts will be given. Speaker: Dr Helio Takai (Physics Department, Brookhaven National Laboratory) The Multimedia Project Quarked! 15m Can exposure to fundamental ideas about the nature of matter help motivate children in math and science and support the development of their understanding of these ideas later? Physicists, designers, and museum educators at the University of Kansas created the Quarked!™ Adventures in the subatomic Universe project to provide an opportunity for youth to explore the subatomic world in a fun and user friendly way. The project components include a website (www.quarked.org) and facilitated hands-on shows. These are described and assessment results are presented. Questions addressed include the following. Can you engage elementary and middle school aged children with concepts related to particle physics? Can young children make sense of something they can't directly see? Do teachers think the material is relevant to their students? Speaker: Prof. Alice Bean (Department of Physics and Astronomy) Plain English Summaries of Experimental Results 15m Press releases are issued by labs when a major result such as a discovery is announced. More commonly, we write papers that are not worthy of a press release. 
Nonetheless, many in the public are quite interested to see progress in our experiments. Tevatron experiments have pioneered plain English summaries of experimental papers, and the concept is spreading to the LHC experiments. It is difficult to write a good plain English summary. We should develop this art-form further. Speaker: Michael Barnett (Lawrence Berkeley National Lab) Conveners: Prof. Doreen Wackeroth (SUNY at Buffalo), Prof. Sridhara Dasu (University of Wisconsin) Z boson property at Tevatron: angular coefficients and Afb of Drell-Yan process 20m We report on the first measurement of the angular distribution of the final state leptons and also the forward-backward asymmetry (Afb), which is sensitive to the Weinberg angle, in $p\bar{p} \to \gamma^{*}/Z \to \ell^{+}\ell^{-} + X$ events produced at $\sqrt{s}=1.96$ TeV. The data sample was collected by the CDF II detector. The angular distributions are studied as a function of the transverse momentum of the lepton pair. The Lam-Tung relation, which is only valid for a spin-1 description of the gluon, is also tested. Speaker: Jiyeon Han (University of Rochester) Studies of Z/gamma* differential cross sections in ppbar collisions with the D0 detector 20m We use up to 7.3 fb-1 of ppbar collisions collected with the D0 detector to study different differential distributions for Z/gamma* produced in ppbar collisions at the Tevatron collider. In one study we investigate the transverse momentum distribution of the Z/gamma* boson by using a novel observable that has reduced sensitivity to the effects of experimental resolution and efficiency, allowing detailed investigations of QCD predictions for the dependence of the Z boson transverse momentum on its rapidity. In a second study we investigate the angular distribution of the Z/gamma* decay products as a function of their invariant mass and derive measurements of the electroweak mixing angle and of the Z-light quark couplings. Speaker: Rafael Lopes de Sa (Stony Brook University) Measurement of W and Z boson production rate and asymmetry at CMS 20m We report the measurements of the rates and asymmetries of inclusive and differential production of W and Z vector bosons in pp collisions at 7 TeV c.m. energy with the CMS detector. Speaker: Ji Yeon Han (University of Rochester) Rates of Jets Produced in Association with W and Z Bosons 20m We present a study of jets produced in association with vector boson production in pp collisions at a center-of-mass energy of 7 TeV using the full CMS 2010 data set, corresponding to 36 pb−1. The transverse energy distribution of the reconstructed leading jets is measured and compared to theoretical expectations. The jet multiplicity distributions are efficiency corrected and unfolded. The ratios of multiplicities, sigma(V+n+1)/sigma(V+n) and sigma(W+n)/sigma(Z+n), where n stands for the number of jets, are also presented. Speaker: Kira Suzanne Grogg (Department of Physics-University of Wisconsin) A Measurement of the Ratio of the W+ 1 Jet to Z + 1 Jet Cross Sections with ATLAS 20m The measurement of hadronic activity recoiling against W and Z vector bosons provides an important test of perturbative QCD, as well as a method of searching for new physics in a model independent fashion. We present a study of the cross-section ratio for the production of W and Z gauge bosons in association with exactly one jet, R-jets = (W+1 jet)/(Z+1 jet), in pp collisions at sqrt(s) = 7 TeV. The study is performed in the electron and muon channels with data collected with the ATLAS detector at the LHC.
The ratio R-jets is studied as a function of the cumulative transverse momentum pT distribution of the jet. Residual systematic uncertainties are parameterized in the same pT distribution. This result can be compared to NLO pQCD calculations and the prediction from LO matrix element + parton shower generators. Speaker: Mr Andrew Robert Meade (University of Massachusetts) Hydrodynamic flow in Pb+Pb collisions observed via azimuthal angle correlations of charged hadrons 25m Azimuthal angle correlations of charged hadrons were measured in $\sqrt{s_{NN}}$ = 2.76 TeV Pb+Pb collisions by the CMS experiment. The distributions exhibit anisotropies that are correlated with the event-by-event orientation of the reaction plane. Several methods were employed to extract the strength of the signal: the event-plane, cumulant and Lee-Yang Zeros methods. These methods have different sensitivity to correlations that are not caused by the collective motion in the system (non-flow correlations due to jets, resonance decays, and quantum correlations). The second Fourier coefficient of the charged hadron azimuthal distributions was measured as a function of transverse momentum, pseudorapidity and centrality in a broad kinematic range ($0.3 < p_T < 12.0$ GeV/c, $|\eta| < 2.4$). In addition, first results on odd Fourier components will be presented and their connection to the hydrodynamic medium will be discussed. Speaker: Charles Felix Maguire (Department of Physics and Astronomy-Vanderbilt University) Conformal hydrodynamics in Minkowski and de Sitter spacetimes 25m I will show how to generate non-trivial analytic solutions to the conformally invariant, relativistic fluid dynamic equations by appealing to the Weyl covariance of the stress tensor. The technique I will present recasts the relativistic conformally invariant Navier-Stokes equations in four-dimensional Minkowski space as a static flow in three-dimensional de Sitter space times a line. The solution obtained can be thought of as a generalization of Bjorken flow. The simplicity of the de Sitter form of the flow enables a study of second order viscous corrections and linearized perturbations. Speaker: Amos Yarom (Princeton University) Measurement of elliptic and higher order flow harmonics in $\sqrt{s_{NN}}=2.76$ TeV Pb+Pb collisions with the ATLAS Detector. 25m The flow harmonics $v_n$ are important bulk observables in heavy ion collisions. They contain information about the initial geometry as well as the transport properties of the medium produced in heavy ion collisions. We present the measurements of flow harmonics $v_2$-$v_6$ using the event-plane (EP) method and the two-particle correlation method in broad $p_T$, $\eta$ and centrality ranges with the ATLAS detector at the LHC. ATLAS recorded 9 µb^-1 of Pb+Pb data in the 2010 heavy ion run. This large dataset and the large detector acceptance ($2\pi$ in azimuth and $\pm 2.5$ units in $\eta$ for charged hadrons) allow for a detailed study of the flow harmonics. The phase space regions where the two methods are consistent and where they disagree will be discussed. We show that the novel structures seen in two-particle correlations, such as the near- and away-side ridge as well as the so-called "Mach cone", are entirely accounted for by the collective flow. Some interesting scaling relations between the $v_n$ will also be shown.
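(Editorial note: throughout the heavy-ion abstracts above and below, the flow harmonics $v_n$ and event-plane angles $\Psi_n$ follow the standard convention in which the charged-particle azimuthal distribution is expanded as $dN/d\phi \propto 1 + 2\sum_{n} v_n \cos[n(\phi-\Psi_n)]$; $v_2$ is the elliptic flow, $v_3$ the triangular flow, and two-particle correlations measure $\langle\cos n(\phi_1-\phi_2)\rangle \simeq v_n^2$ up to non-flow contributions.)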
Speaker: Mr Soumya Mohapatra (Department of Physics-State University of New York (SUNY)) Triangularity and Dipole Asymmetry in Ideal Hydrodynamics 20m We introduce a cumulant expansion to parameterize possible initial conditions in heavy ion collisions. We show that the cumulant expansion converges and can systematically reproduce the results of Glauber-type initial conditions. At third order in the gradient expansion, the cumulants are described with the triangularity $\langle\langle r^3 \cos 3(\phi - \psi_{3,3}) \rangle\rangle$, and a dipole asymmetry, $\langle\langle r^3 \cos(\phi - \psi_{1,3}) \rangle\rangle$. We show that the orientation angle of the dipole asymmetry $\psi_{1,3}$ has a $20\%$ asymmetry out of plane for mid-central collisions. This leads to a small net $v_1$ out of plane. In peripheral and mid-central collisions the orientation angles $\psi_{1,3}$ and $\psi_{3,3}$ are strongly correlated, but this correlation disappears towards central collisions. We study the ideal hydrodynamic response to these cumulants and determine the associated $v_1/\epsilon_1$ and $v_3/\epsilon_3$ for a massless ideal gas. The space-time development of $v_1$ and $v_3$ is clarified with figures. These figures show that $v_1$ and $v_3$ develop towards the edge of the nucleus, and consequently the final spectra are more sensitive to the viscous dynamics of freezeout. The hydrodynamic calculation for $v_3$ is provisionally compared to the Alver and Roland fit of STAR inclusive two-particle correlation functions. Finally, we propose to measure the $v_1$ associated with the dipole asymmetry by measuring $\langle\langle \cos(\phi - 3\Psi_{R3} + 2\Psi_{R2}) \rangle\rangle$, where $\Psi_{R3}$ is an experimental estimate for the triangular event plane while $\Psi_{R2}$ is the usual quadrupole event plane estimate. This experimental measurement would provide convincing evidence for the strong correlation between $\psi_{1,3}$ and $\psi_{3,3}$, and, by association, for the hydrodynamic interpretation of two-particle correlations at RHIC. Speaker: Li Yan (Stony Brook University) Triangular Flow in Relativistic Heavy Ion Collisions in an Event-by-Event Hybrid Approach 25m Triangular flow has been shown to be an interesting new observable to gain insight into the properties of the hot and dense strongly interacting matter produced in heavy ion collisions at RHIC and the LHC. We will present triangular flow results for Au+Au collisions at the highest RHIC energy calculated in a hybrid approach that includes a non-equilibrium initial evolution and an ideal hydrodynamic expansion with a hadronic afterburner in 3+1 dimensions. By comparing the hybrid approach calculation with a pure transport approach, the influence of viscosity is studied. In addition, the potential of triangular flow for constraining the initial state granularity will be discussed. We compare the results from Au+Au collisions at 200 GeV per nucleon to Pb+Pb at LHC energies and find that the fluctuations/v3 values at the LHC are surprisingly similar. Furthermore, the longitudinal long-range correlations of the triangular flow event plane angle are explored for initial conditions from a partonic and a hadronic transport approach. We conclude that longitudinal long-range correlations are not a unique signature for flux tube-like initial conditions, but can also be produced by other mechanisms.
Speaker: Hannah Petersen (Duke University) A Search For The Higgs Boson In H --> ZZ--> 2l 2nu Mode 20m We report on a search for SM Higgs Boson in the mode H --> ZZ--> 2l 2nu conducted by the CMS experiment with the data accumulated during the 2010 & 2011 running of the LHC at sqrt(s) = 7 TeV. Speaker: Daniele Trocino (Department of Physics-Northeastern University) Search for the Higgs boson in the H->gamma gamma decays in proton-antiproton collisions at 1.96 TeV 20m Recent searches conducted at the Fermilab Tevatron for the Higgs boson in the diphoton decay channel are reported using 7.0/fb and 8.2/fb of data collected at the CDF and D0 experiments respectively. Although the standard model (SM) branching fraction is small, the diphoton final state is appealing due to better diphoton mass resolution compared with dijet final states. In addition, other models --- such as fermiophobic models where the Higgs does not couple to fermions --- predict much larger branching fractions for the diphoton decay. Here, results are presented for both a SM and fermiophobic Higgs boson as well as a SM search based on a combination of the CDF and D0 analyses. Speaker: Karen Bland (Baylor University) Search for Higgs Boson in Diphoton Final State with the ATLAS Detector 20m Diphoton final state is one of the most sensitive channels to a Higgs search in the low mass region (115GeV-140GeV). The analysis is optimized to search for a Standard Model (SM) Higgs and dominates the SM Higgs combination in the mass range lower than 120GeV. Nevertheless, an inclusive search strategy adopted is also sensitive to find a low mass diphoton resonance that could be predicted by some new physics models. Speaker: Haichen Wang (University of Wisconsin-Madison) Searches for the Higgs boson in VH->VWW->leptons+X decays in p-pbar collisions at sqrt(s)=1.96 TeV 19m We present searches for the standard model Higgs boson produced via the VH->VWW->leptons+X process at a center-of-mass energy of 1.96 TeV with the CDF and D0 detectors at the Fermilab Tevatron Collider. We require either two like charge-signed leptons (electron or muon) or three charged leptons (electron or muon). These channels provide significant sensitivity in the intermediate Higgs boson mass range. Inclusion of data up to 7.3 inverse fb and recent improvements to the sensitivity will be discussed. Speaker: Michael Cooke SUSY QCD Corrections to Higgs-b Production 15m The dominant production mechanism for Standard Model (SM) Higgs boson is gg->h. However, in certain beyond the SM scenarios, Higgs production with bottom quarks can become dominant due to enhanced bottom quark Yukawa coupling. One such model is the Minimal Supersymmetric Standard Model (MSSM) where the bottom Yukawa coupling to Higgs bosons, including the SM-like Higgs, gets significantly modified for large values of tan(beta) [defined as the ratio of the vacuum expectation values of the up and down type Higgs]. In this talk, I focus on one-loop supersymmetric QCD corrections to the subprocess b g->b h which is the leading order (LO) process in five flavor number PDF scheme (5FNS) when one bottom quark in the final state is tagged. In particular, I investigate the validity of the commonly used Delta_b approximation where one rescales the bottom Yukawa in MSSM in order to include large tan(beta) effects. 
Speaker: Prerit Jaiswal (YITP, Stony Brook University) Neutrino Physics: Chaired by Ed Kearns 550 Conveners: Edward Kearns (Boston University), Sam Zeller (FNAL) Constraints on non-standard neutrino-matter interactions from MINOS 20m MINOS searches for neutrino oscillations using the disappearance of muon neutrinos between two detectors, over a baseline of 735 km. We recently reported the most precise measurement of neutrino oscillations in the atmospheric sector and the first tagged measurement of antineutrino oscillations. The neutrino mass splitting and mixing angle are measured to be $|\Delta m^{2}| = 2.32_{-0.08}^{+0.12} \times 10^{-3}\,eV^{2}$ and $\sin^{2}2\theta > 0.90$ (90% C.L.) for an exposure of $7.25 \times 10^{20}$ protons-on-target (PoT). Antineutrino oscillation parameters are measured as $\Delta \overline{m}^{2}=(3.36^{+0.46}_{-0.40}\textrm{(stat.)}\pm0.06\textrm{(syst.)})\times 10^{-3}\,eV^{2}$ and $\sin^{2}(2\overline{\theta})=0.86^{+0.11}_{-0.12}\textrm{(stat.)}\pm0.01\textrm{(syst.)}$ with an exposure of $1.7 \times 10^{20}$ PoT in NuMI antineutrino running mode. We use the apparent difference in neutrino and antineutrino oscillation parameters to constrain non-standard matter interactions which could occur during propagation through the Earth's crust to the far detector. Speaker: Zeynep Isvan (University of Pittsburgh) Neutrino Oscillation Results from T2K 20m The T2K experiment is a long baseline neutrino oscillation experiment designed to directly measure $\nu_{e}$ appearance, thereby providing a measurement of $\theta_{13}$, the last unknown neutrino mixing angle. In addition, T2K will make precision measurements of $\Delta m_{23}^2 $ and $\sin ^2\left( {2\theta _{23} } \right)$ via measurement of $\nu_{\mu}$ disappearance. To achieve these goals, a beam of muon neutrinos is produced at the Japan Proton Accelerator Research Complex in Tokai, Japan. At a distance of 280 meters from the beam origin, a set of detectors has been constructed in order to measure the properties of the beam before oscillation. The Super-Kamiokande detector 295 kilometers away serves as the far detector that measures the beam after oscillation. T2K's first data-taking run began in January 2010, concluded in June 2010, and accumulated $0.323\times 10^{20}$ POT. The second data-taking run began in November 2010, concluded in March 2011, and accumulated $1.108\times 10^{20}$ POT. I will summarize the results of the analysis from these runs. Speaker: Glenn Lopez (Stony Brook University) NOvA: Present and Future 20m NOvA is a next generation neutrino oscillation experiment designed to search for muon neutrino to electron neutrino oscillations by comparing electron neutrino event rates in a Near Detector at Fermilab with the rates observed in a large Far Detector at Ash River, Minnesota, 810 km from Fermilab. The detectors are totally active, segmented liquid scintillator detectors, and the Near Detector is located 14 mrad off the NuMI beam axis. Construction of the Far Detector has begun and it will begin taking data in early 2013. The experiment aims to measure the neutrino mixing angle theta_13 and will push the search for electron neutrino appearance beyond the current limits by more than an order of magnitude. For non-zero theta_13, it is possible for NOvA to observe CP violation in neutrinos and establish the neutrino mass hierarchy.
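(Editorial note: a minimal numerical sketch of the two-flavour survival probability underlying the MINOS and NOvA disappearance measurements quoted above; the baseline, energy range and oscillation parameters below are illustrative values chosen for the example, not results from the talks.)

```python
import numpy as np

def survival_prob(E_GeV, L_km=735.0, dm2_eV2=2.32e-3, sin2_2theta=1.0):
    """Two-flavour nu_mu survival probability:
    P = 1 - sin^2(2*theta) * sin^2(1.267 * dm^2[eV^2] * L[km] / E[GeV])."""
    return 1.0 - sin2_2theta * np.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative scan over neutrino energy for a MINOS-like 735 km baseline;
# the first oscillation maximum (deepest dip) sits near E ~ 1.4 GeV.
for E in np.linspace(0.5, 10.0, 20):
    print(f"E = {E:5.2f} GeV   P(numu -> numu) = {survival_prob(E):.3f}")
```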
The NOvA prototype near detector on the surface (NDOS) began running at Fermilab in November and registered its first neutrinos from the NuMI beam in December 2010. An overview and current status of the experiment will be presented. Speaker: Dr Gavin Davies (Iowa State University) Daya Bay Neutrino Experiment: Goal, Progress and Schedule 20m The discovery of neutrino oscillation, as a breakthrough in particle physics, motivated the Daya Bay Neutrino Experiment, which is designed to make a precise measurement of the last unknown neutrino mixing angle theta13, with a sensitivity of 0.01 for sin^2(2*theta13), using reactor antineutrinos from the 17.4 GW Daya Bay Nuclear Power Plant located in Shenzhen, China. This talk will introduce the goals of the experiment, including an overview of site and baseline selection, detector optimization, current construction progress, and the schedule for expected data taking. Speaker: Dr Zhe Wang (Brookhaven National Laboratory) The 600 Ton ICARUS Liquid Argon Experiment at the LNGS 20m We review briefly the R&D effort that went into the construction of the 600 Ton Liquid Argon TPC. The detector is operating very well, with electron drift distances near 4 m. The detector is exposed to the CNGS beam from CERN and is collecting neutrino events. More than 130 neutrino events have been observed. Other physics goals include exotic proton decay and sterile neutrinos. ICARUS is also a prototype for the much larger multi-kiloton detectors being designed around the world. Speaker: Prof. David Cline (UCLA) Status of the Long-Baseline Neutrino Experiment LBNE 20m LBNE is an experiment being designed to probe the parameters of neutrino mixing accessible through nu_mu to nu_e oscillation measurements at the atmospheric L/E scale. It will consist of a new neutrino beam line and Near Detector complex at Fermilab, and one or more very large Far Detector modules, nominally to be sited underground in the Homestake Mine in South Dakota. In addition to the long-baseline neutrino program, the Far Detector system will enable a variety of other physics studies with unprecedented sensitivity, including searches for nucleon decay and supernova neutrino bursts. We will report on the status of the conceptual design for the experiment, now being finalized in preparation for DOE's CD-1 milestone. Speaker: Jon Urheim (Indiana University) Top Quark Physics: Chaired by Kirill Melnikov 553 A Measurement of the ttbar Cross Section and the Top Quark Mass in the Hadronic Tau + Jets Decay Channel at CDF 20m We present a measurement of the ttbar cross section as well as the first measurement of the top quark mass in hadronic tau + jets events from 1.96 TeV ppbar collisions at CDF. Events are required to have a single lepton identified as a hadronic tau, missing Et, and 4 jets, of which at least one must be tagged as a b jet. Both the cross section and the mass are extracted from unbinned likelihood functions. The cross section uses a Poisson likelihood function based on the observed number of events and the predicted number of signal and background events for a given ttbar cross section. The mass is extracted from a likelihood fit based on per-event probabilities calculated from leading-order signal (ttbar) and background (W+jets) matrix elements. Our goal is to directly identify this final state for the first time at CDF as well as to provide the first measurement of the top quark mass in this decay channel.
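To make the counting-experiment logic in the CDF abstract above concrete, here is a minimal sketch of extracting a cross section by minimizing a single-bin Poisson negative log-likelihood. All numbers and names (n_obs, acc, lumi_pb, n_bkg) are invented placeholders for illustration; the actual CDF analysis is unbinned, uses per-channel inputs, and treats systematic uncertainties properly.

import math

def nll(sigma_pb, n_obs=120, acc=0.005, lumi_pb=2200.0, n_bkg=45.0):
    # Expected yield for a hypothesized cross section: mu = sigma * acceptance * luminosity + background.
    mu = sigma_pb * acc * lumi_pb + n_bkg
    # Negative log of the Poisson probability P(n_obs | mu), including the constant term.
    return mu - n_obs * math.log(mu) + math.lgamma(n_obs + 1)

# Crude scan for the minimum; a real analysis would use a proper minimizer
# and profile the systematic uncertainties.
scan = [(s / 10.0, nll(s / 10.0)) for s in range(1, 200)]
best_sigma, best_nll = min(scan, key=lambda point: point[1])
print("best-fit cross section ~ %.1f pb" % best_sigma)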
Speaker: Daryl Hare (Rutgers University) Measurement of Tau Leptons from Top Quark Pair Production and Decay in ATLAS 20m The top quark can be used as a probe of new physics, particularly when it decays into non-SM particles. One promising example is the decay of a top quark into a charged Higgs boson and b-quark. Charged Higgs bosons occur naturally in extended Higgs sectors and for a wide range of models can decay nearly exclusively into a tau lepton and a neutrino. In this talk, we summarize the searches for tau leptons from SM top quark pair production and decay, and their interpretation as a probe of physics beyond the standard model using the ATLAS detector. Speaker: Allison McCarn (Department of Physics-Univ. Illinois at Urbana-Champaign) Top Quark Theoretical Cross Sections and pT and Rapidity Distributions 20m I present theoretical results for the top quark pair total cross section, and for the top quark transverse momentum and rapidity distributions at Tevatron and LHC energies. I also present results for single top quark production in the t- and s-channels and also via associated production with a W boson. The calculations include approximate NNLO corrections that are derived from NNLL soft-gluon resummation. Speaker: Prof. Nikolaos Kidonakis (Kennesaw State University) Measurements of the top quark pair production cross section at 7 TeV 20m We present several measurements of the top-pair production cross section in proton-proton collisions at the LHC at a centre-of-mass energy of 7 TeV. We use data collected with the CMS experiment during the year 2011. Measurements are presented in the lepton+jets final state, where events are selected by requiring exactly one isolated and highly energetic muon or electron, and at least four jets. In addition the di-lepton final state, consisting of two electrons or muons, at least two jets, and significant missing energy in the transverse plane, is used. We use b-jet identification in order to increase the purity of the selection. We present data-driven techniques to estimate the most important backgrounds, and discuss the systematic uncertainties on the measurements. The results, superseding previous measurements based on 2010 data, are combined and compared with the theory predictions. Speaker: Sadia Khalil (Kansas State University) A standard model explanation of a dijet excess in Wjj at CDF 20m The observation of a peak in the dijet invariant mass of the Wjj signal by the CDF Collaboration has caused great excitement. We demonstrate that this peak can be explained as the same upward fluctuations CDF observes in both t-channel and s-channel single-top-quark production. A peak in the dijet spectrum is expected, because CDF used a Monte Carlo simulation to subtract the single-top backgrounds instead of data. The D0 Collaboration has a small upward fluctuation in their published t-channel data; and, hence, we predict they would see a small peak in the dijet invariant mass spectrum of Wjj if they used Monte Carlo instead of data to subtract the single-top backgrounds. Speaker: Prof. Zack Sullivan (Illinois Institute of Technology) Search for new physics in ttbar + MET -> b bbar qqbar qqbar final state in ppbar collisions at sqrt(s) = 1.96 TeV 20m We present a search for a new particle T' decaying to a top quark via T'-> t + X, where X goes undetected. We use a data sample corresponding to 5.7 inverse fb of integrated luminosity of ppbar collisions with sqrt(s) = 1.96 TeV, collected at Fermilab Tevatron by the CDF II detector. 
Our search for pair production of T' is focused on the hadronic decay channel, ppbar -> T'Tbar' -> ttbar + XX -> bqqbar bbarqqbar +XX. We interpret our results in terms of a model where T' is an exotic fourth generation quark and X is a dark matter particle. The data are consistent with standard model expectations. We set a limit on the generic production of T'Tbar' -> ttbar + XX, excluding the fourth generation exotic quark T' at 95% confidence level up to mT' = 400 GeV/c^2 for mX < 70 GeV/c^2. Speaker: Marco Bentivegna Welcome reception 2h Art Museum of the Rhode Island School of Design Wednesday, 10 August Convener: Jonathan Rosner Field and String Theory 30m Speaker: Christopher Herzog (Princeton University) QCD: Experiment 30m Speaker: Prof. Joey Huston (Michigan State University) QCD: Theory 30m Speaker: Prof. Christian Bauer (UC Berkeley) Convener: Alice Bean (Department of Physics and Astronomy-University of Kansas) Hadron Spectroscopy 30m Speaker: Adam Szczepaniak (Indiana University) Liquid Quark-Gluon Plasma: Opportunities and Challenges 30m Speaker: Krishna Rajagopal (MIT) Heavy Ion Physics 30m Speaker: Barbara Jacak (Stony Brook University) Education & Outreach 29m Speaker: Michael Barnett (Physics Division-Lawrence Berkeley National Lab. (LBNL)) Physics Opportunities with Project X Ballroom A Fermilab Strategy with LBNE and Project X 10m Speaker: Dr Pier Oddone (Fermilab) Project X: Accelerator and Physics 30m Speakers: Brendan Casey (Fermilab), Stephen Holmes (Fermilab) DOE Sponsored Intensity Frontier Workshop 10m Speaker: Dr Michael Procario (DOE, Office of Science) Q/A 30m Project-X: stepping stone for future accelerator-based HEP at Fermilab 30m Fermilab is leading an international consortium to develop the design of "Project-X", an accelerator complex based on a new H- linac that will drive a broad range of experiments at the Intensity Frontier. Project X will provide multi-MW beams from the Main Injector over the energy range 60-120 GeV, simultaneous with multi-MW beams at 3 GeV. The Project-X research program includes world-leading sensitivity in long-baseline neutrino experiments, neutrino scattering experiments, and a rich program of ultra-rare decay and electric dipole moment experiments that are sensitive to most new physics scenarios beyond the Standard Model. Shared technology development with the International Linear Collider and the Muon Collider will establish a bridge to future facilities at the energy frontier. This talk will describe the Project-X accelerator configuration, associated performance projections, the status of the accelerator and research program R&D, and the strategy for moving forward. Speaker: Bob Tschirhart (Fermilab) eRHIC collider design status 30m We present the design of a future high-energy, high-luminosity electron-hadron collider at RHIC called eRHIC. We plan on adding 20 (potentially 30) GeV energy recovery linacs to accelerate and to collide polarized and unpolarized electrons with hadrons in RHIC. The center-of-mass energy of eRHIC will range from 30 to 200 GeV. A luminosity exceeding 10^34 cm^-2 s^-1 can be achieved in eRHIC using the low-beta interaction region with a 10 mrad crab crossing. We report on the progress of important eRHIC R&D such as the high-current polarized electron source, coherent electron cooling, and the compact magnets for recirculating passes.
A natural staging scenario of step-by-step increases of the electron beam energy by building up eRHIC's SRF linacs, and the potential of adding polarized positrons, are also presented. Muon Collider Progress: Accelerators 30m A muon collider would be a powerful tool for exploring the energy frontier with leptons, and would complement the studies now under way at the LHC. Such a device would offer several important benefits. Muons, like electrons, are point particles so the full center-of-mass energy is available for particle production. Moreover, on account of their higher mass, muons give rise to very little synchrotron radiation and produce very little beamstrahlung. The first feature permits the use of a circular collider that can make efficient use of the expensive RF system and whose footprint is compatible with an existing laboratory site. The second feature leads to a relatively narrow energy spread at the collision point. Designing an accelerator complex for a muon collider is a challenging task. Firstly, the muons are produced as a tertiary beam, so a high-power proton beam and a target that can withstand it are needed to provide the required luminosity of ~1 x 10^34 cm^-2 s^-1. Secondly, the beam is initially produced with a large 6D phase space, which necessitates a scheme for reducing the muon beam emittance ("cooling"). Finally, the muon has a short lifetime so all beam manipulations must be done very rapidly. The Muon Accelerator Program, led by Fermilab and including a number of U.S. national laboratories and universities, has undertaken design and R&D activities aimed toward the eventual construction of a muon collider. Design features of such a facility and the supporting R&D program will be described. Speaker: Dr Michael Zisman (Lawrence Berkeley National Laboratory) The ATLAS Search for Resonances in the Inclusive Dijet Final State 20m I present the latest result from the ATLAS search for resonant production of new particles decaying into two jets, using data taken in 2011. Speaker: Dr Georgios Choudalakis (University of Chicago, Enrico Fermi Institute) Search for high mass dilepton resonances in pp collisions at sqrt(s)=7 TeV with the ATLAS experiment 20m The ATLAS detector has been used to search for high mass e e or mu mu resonances, such as new heavy neutral gauge bosons. This talk will present the latest search results for a high mass state decaying to dilepton pairs, in proton-proton collisions at a center of mass energy of 7 TeV at the Large Hadron Collider using data recorded by the ATLAS experiment in 2011. Speaker: Mr Dominick Olivito (University of Pennsylvania) Search for Randall-Sundrum Gravitons at the LHC, Recent Results from the ATLAS Collaboration 20m With a substantial increase in luminosity at the LHC, 2011 is an exciting time for searches for new physics. The Randall-Sundrum model, in which a warped extra dimension is introduced to resolve the hierarchy problem, predicts a spectrum of massive excited gravitons. There is significant potential for discovery of the lightest of these gravitons. The latest results from the ATLAS Collaboration for RS gravitons decaying to diphoton and dielectron final states will be presented. Speaker: Mr Evan Wulf (Columbia University) Searches for Large Extra Dimensions at CMS 20m Results of searches for Large Extra Dimensions (LED) in pp collisions at a center-of-mass energy of 7 TeV with the CMS detector are presented.
Having analyzed the full 2011 dataset, we found no excess of events above the standard model (SM) expectations. We set stringent limits on the multidimensional Planck scale as well as on the masses of exotic objects that are consequences of LED. Speaker: Alexey Ferapontov (Department of Physics-Brown University) Search for GMSB SUSY and extra dimensions in diphoton+missing ET and Z+photon+missing ET final states at D0 20m We report the result of two searches for final states with either two photons and large missing transverse energy or with a Z boson, a photon and large missing transverse energy, using data collected with the D0 detector at the Fermilab Tevatron collider and corresponding to integrated luminosities of up to 6.3 fb-1. The results of these searches are interpreted in the framework of gauge mediated supersymmetry models and in models with extra dimensions, and limits are set on the parameters of these models. Speaker: Yunhe Xie (Fermilab) Searches for Supersymmetry in Events with Photons and Missing Transverse Energy with the CMS Detector at the LHC 20m We present the results of searches for Supersymmetry in various topologies that lead to final states with jets, missing transverse momentum and one or two photons or a photon and a lepton. These searches are performed using data collected by the CMS experiment at the LHC in pp-collisions at a center-of-mass energy of 7 TeV. Various data-driven techniques used to measure the Standard Model backgrounds are discussed. The results are interpreted in General Gauge Mediated Supersymmetry breaking models. Speaker: Duong Hai Nguyen (Department of Physics-Brown University) Two-loop corrections to W and Z boson production at high pT 30m I present new results for the complete two-loop corrections in the soft approximation for W and Z boson production at large transverse momentum. Analytical expressions for the NNLO approximate transverse momentum distributions are derived. Results for W boson production at Tevatron and LHC energies are presented. Measurement of the Transverse Momentum Distribution of Z/gamma* Bosons in 7 TeV Proton-Proton Collisions with the ATLAS Detector 20m I present a measurement of the Z/gamma* transverse momentum (pTZ) distribution in proton-proton collisions at √s = 7 TeV using Z/gamma*->e+e− and Z/gamma*->μ+μ− decays in data samples corresponding to integrated luminosities of 35 pb^−1 and 40 pb^−1 respectively, taken in 2010 with the ATLAS detector. The normalized pTZ distributions are measured separately for the electron and muon decay channels as well as for their combination up to pTZ of 350 GeV. The combined measurement is compared to predictions of perturbative QCD and various event generators. Speaker: Dr Jianbei Liu (University of Michigan) W boson mass and width measurements at D0 20m We present a precise measurement of the W boson mass in the electron decay channel using data collected by the D0 detector at the Fermilab Tevatron collider. A binned likelihood fit method is used to extract the mass information from the transverse mass, the electron transverse momentum and missing transverse energy distributions. We also present a precise direct measurement of the W boson width using events with large transverse mass. The W mass result can be used to put stringent indirect limits on the SM Higgs boson mass. Speaker: Daniel Boline Study of Wgamma and Zgamma production at the LHC 20m We have used the ATLAS detector to study W and Z bosons produced with high energy photons in pp collisions at sqrt(s) = 7 TeV.
We select Wgamma and Zgamma events from the interactions p+p -> l + nu + gamma + X and p+p -> l + l + gamma + X where the lepton is a muon or electron. The photon is required to be isolated and separated from the lepton(s) by dR(l-gamma)>0.7. The measurement is based upon data collected by the ATLAS experiment in 2011. The production cross sections and the kinematic distributions of the leptons and photons are compared to Standard Model predictions and to predicted sources of new physics. Speaker: Prof. Al Goshaw (Duke University) The Epsilon Expansion via Hypergeometric Functions and Differential Reduction 30m Higher-order diagrams required for radiative corrections to mixed electroweak and QCD processes at the LHC and anticipated future colliders will require numerically stable representations of the associated Feynman diagrams. The hypergeometric representation supplies an analytic framework that is useful for deriving such stable representations. We discuss the reduction of Feynman diagrams to master integrals, and compare integration-by-parts methods to differential reduction of hypergeometric functions. We describe the problem of constructing higher-order terms in the epsilon expansion, and characterize the functions generated in such expansions. Speaker: Prof. Scott Yost (The Citadel) Field and String Theory 551 B Convener: V. Parameswaran Nair (City College of New York) New Mathematics for Old Scattering Amplitudes 30m Scattering amplitudes have played a central role in quantum field theory since its inception. Recent years have seen remarkable progress in our understanding of their previously hidden mathematical simplicity, and in our ability to compute previously intractable scattering amplitudes, both for theoretical and phenomenological purposes. In this talk I will review several of the latest advances on scattering amplitudes in Yang-Mills theory, including on-shell methods and new mathematical technology for dealing with multi-loop amplitudes. Speaker: Marcus Spradlin (Brown University) Manifest SO(N) invariance and S-matrices of three-dimensional N=2,4,8 SYM 20m An on-shell formalism for the computation of S-matrices of SYM theories in three spacetime dimensions will be presented. The framework is a generalization of the spinor-helicity formalism in four dimensions. The formalism will be applied to establish the manifest SO(N) covariance of the on-shell superalgebra relevant to N =2,4 and 8 SYM theories in d=3. The results will be used to argue for the SO(N) invariance of the S-matrices of these theories: a claim which will be proved explicitly for the four-particle scattering amplitudes. Recursion relations relating tree amplitudes of three-dimensional SYM theories will be shown to follow from their four-dimensional counterparts. The results for the four-particle amplitudes will be shown to be verified by tree-level perturbative computations and a unitarity based construction of the integrand corresponding to the leading perturbative correction will also be presented for the N=8 theory. For N=8 SYM, the manifest SO(8) symmetry will be used to develop a map between the color-ordered amplitudes of the SYM and superconformal Chern-Simons theories, providing a direct connection between on-shell observables of D2 and M2-brane theories. 
Speaker: Dr Abhishek Agarwal (American Physical Society) Minimal Holography: Higher spin gravity from 2d CFTs 20m It was recently conjectured that higher spin gravity in three dimensions is holographically dual to a simple, exactly solveable conformal field theory called the W_N minimal model. This raises the possibility of tackling some difficult questions in holography or quantum gravity by performing exact computations at all values of the coupling. I will describe the motivation for studying simplified models of holography based on higher spin gravity, and prove that in this particular duality the spectrum matches exactly at large N. Speaker: Dr Thomas Hartman (Institute for Advanced Study) Fuzzy Twistors and Emergent Gravity 20m We describe a novel regulator of four-dimensional N = 4 Super Yang-Mills theory on a four-sphere. The regulator involves a lift of the theory to a large N matrix model on a non-commutative twistor space. As opposed to other known regulators, this regulator naturally retains both gauge invariance, and the symmetries of the spacetime. We present evidence that in the large N limit, the twistor matrix model correctly reproduces gauge theory scattering amplitudes. We further show that the 1 / N corrections describe an emergent gravitational sector which correctly reproduces tree level scattering amplitudes for Einstein gravity. Speaker: Dr Jonathan Heckman (Institute for Advanced Study) Large Nc Gauge Theories on the Lattice 20m We will present new results pertaining to large Nc gauge on the lattice. Two main topics will be (a) the phases of three dimensional large Nc gauge theories reduced to two dimensions; (b) single site realization of large Nc gauge theories with adjoint fermions. Speaker: Rajamani Narayanan (Florida International University) Heavy Flavor Physics: LHC techniques 554 Conveners: Alexey Petrov (Wayne State University), Prof. Christian Bauer (UC Berkeley), Owen Long (University of California Riverside) b-tagging Algorithms in the CMS experiment 18m The identification of b-jets is an important ingredient in characterizing top quark events and many new physics scenarios. The b-tagging algorithms developed within the CMS experiment are mainly based on the large lifetime of b-hadrons. The discriminators and variables defined by the various algorithms which characterize b-jets (e.g. track impact parameter, vertex properties) have been studied using data and compared to expectations from Monte Carlo simulations. In addition detailed studies to optimize track selection and assignment to the jet have been performed in different running conditions and compared with simulations. These studies have led to improvements and optimization of the software tools for the high event pileup scenarios during the 2011 LHC running. Speaker: Gavril Giurgiu (Rowland Dept. of Phys. and Astron.-Johns Hopkins University (JH) Efficiency measurement of b-tagging algorithms developed by the CMS experiment 18m Identification of jets originating from b quarks (b-tagging) is a key element of many physics analyses at the LHC. Various algorithms for b-tagging have been developed by the CMS experiment to identify b-tagged jets with a typical efficiency between 40% and 70% while keeping the rate of mis-identified light quark jets between 0.1% and 10%. An important step, in order to be able to use these tools in physics analysis, is the determination of the efficiency for tagging b-jets. 
Several methods to measure the efficiencies of the lifetime-based b-tagging algorithms are presented. Events that have jets with muons are used to enrich a jet sample in heavy flavor content. The efficiency measurement relies on the transverse momentum of the muon relative to the jet axis or on solving a system of equations which incorporates two uncorrected taggers. Another approach uses the number of b-tagged jets in top pair events to estimate the efficiency. The results obtained in 2010 data and the uncertainties obtained with the different techniques are reported. Speaker: Saptaparna Bhattacharya (Brown University) b-Tagging at ATLAS 18m The ATLAS detector, one of the two general purpose detectors at the LHC, has collected several hundred inverse picobarns since the start of 2011 running. The large dataset has allowed deeper studies of bottom-quark tagging performance than previously possible. Bottom-quark tagging is an important signal/background selection tool used in top analyses, SUSY analyses, Exotics analyses, and Standard Model analyses - anytime heavy flavor is important in the final state. In this talk I will give a very brief overview of ATLAS b-tagging and concentrate on the performance studies, calibrations, and lessons learned with this large dataset. Speaker: Gordon Watts (Department of Physics-University of Washington) Electron Vetos and Taus at ATLAS 18m I will present strategies used to separate electron signatures from tau lepton signatures with the ATLAS detector, one of the general purpose detectors on the LHC ring at CERN. Taus can decay leptonically, to electrons or muons and neutrinos, or hadronically, to a number of neutral and charged hadrons and neutrinos. These decays happen before the taus reach the innermost layer of the detector, so the work of recognizing the tau decay products is challenging. As electron and QCD signatures resemble those of taus, vetoes must be applied. The results of those cut-based and multivariate electron veto techniques will be shown. Speaker: Ms Susie Bedikian (Physics Department-Yale University) Probing hot and dense nuclear matter with particle correlations and jets at RHIC 25m High-energy nucleus-nucleus collisions at RHIC have produced quark matter in which quarks and gluons are believed to be deconfined. Single-particle spectra have shown that partons lose a significant amount of energy in such a medium. It is therefore important to further explore the medium properties using multi-particle correlations and jets. In this contribution, we present recent results from RHIC on the following related analyses. We will discuss the studies of the "ridge" and the away-side correlation structure in central A+A collisions via multi-particle correlations. Higher-order Fourier harmonics extracted from di-hadron correlations in comparison with initial density fluctuation models will be presented. A comparative analysis of hadron correlations with a high-energy particle vs. fully reconstructed jets will also be discussed. Speaker: Dr Hua Pei (University of Illinois at Chicago) The Rise and Fall of the Ridge at RHIC and the LHC 25m The low-pT ridge correlations exhibit an interesting centrality dependence: they rise quickly with centrality but then, in the most central collisions, fall again. This centrality dependence is seen for 62.4 GeV, 200 GeV, and 2.76 TeV data. In this talk, I discuss how the rise and fall of the ridge demonstrates that the ridge is connected to the initial eccentricity.
I discuss the connection of the away-side correlations to the near-side correlations and also explain why RHIC should collide Pb ions instead of Au ions. Speaker: Dr Paul Sorensen (BNL) Untriggered di-hadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}} =$ 2.76 TeV 25m We present measurements of untriggered di-hadron correlations as a function of centrality in Pb-Pb collisions at $\sqrt{s_{NN}} = 2.76$ TeV, for charged hadrons with $p_{T} > 0.15$ GeV$/c$. These measurements provide a map of the bulk correlation structures in heavy-ion collisions. Contributions to these structures may come from jets, initial density fluctuations, elliptic flow, resonances, and/or momentum conservation. We decompose the measured correlation functions via a multi-parameter fit in order to extract the near-side Gaussian and the longer-range $\Delta \eta$ correlation often referred to as the soft ridge. The effect of including higher harmonics ($v_{3}$ and $v_{4}$) in this procedure will be discussed. We investigate how the near-side Gaussian scales with the number of binary collisions. Finally, we show the charge dependence of the near-side Gaussian. Speaker: Dr Anthony Robert Timmins (University of Houston) Dihadron correlations in PbPb collisions at 2.76 TeV with CMS 25m Measurements of charged dihadron correlations from the CMS collaboration are presented for PbPb collisions at a center-of-mass energy of 2.76 TeV per nucleon pair over a broad range of pseudorapidity and the full range of azimuthal angle. With its large pseudorapidity coverage, the CMS tracker is ideally suited for detailed analyses of both short- and long-range charged hadron correlations at the LHC. For the most central 0-5% collisions, a broadening of the away-side dihadron correlation is observed at all pseudorapidities when compared to pp collisions. A significant correlated yield is observed for pairs of particles with small relative azimuthal angle but large longitudinal separation, commonly known as the ridge. The ridge persists out to a relative pseudorapidity of 4 units and its effect is found to be stronger than what was previously observed at RHIC. The dependence of the ridge region shape and yield on transverse momentum and collision centrality has been measured. For particles of transverse momentum of 2--4 GeV/c, the ridge is found to be most prominent when correlated to particles of 2--6 GeV/c, but diminishes at higher momentum. A Fourier analysis of the long-range two-particle correlations will be presented and discussed in the context of CMS measurements of higher-order flow coefficients. Speaker: Yuting Bai (Physics Department-University of Illinois at Chicago) Sound Propagation on Top of the Fireball 20m We study the effect that initial state fluctuations have on final particle correlations in heavy ion collisions. More precisely, we focus on the propagation of initial perturbations on top of the expanding fireball using the conformal solution derived by Gubser and Yarom for central collisions. For small perturbations, the hydrodynamic equations are solved by separation of variables and the solutions for different modes are added up to construct initial point-like perturbations, which are then allowed to evolve until freeze-out. The Cooper-Frye prescription is used to determine the final particle distribution. We present the two-particle correlation functions and their Fourier spectra obtained for different viscosities.
We find that viscosity kills the higher harmonics, but that the Fourier spectra present maxima and minima, similar to what is seen in studies of the cosmic microwave background radiation. The difference between the first and the second maximum is used to estimate the viscosity of the medium. Speaker: Pilar Staig (Stony Brook University) The Beginnings of Spontaneous Symmetry Breaking in Elementary Particle Theory 30m I will give a theoretical perspective on the topic of symmetry and symmetry breaking, particularly as the basis of the electroweak theory at the time of the initial formulation and development of the Standard Model. I will also briefly discuss some of these ideas restated in more modern terms. Speaker: Prof. Gerald Guralnik (Brown University, Providence RI 02912, USA) Electroweak symmetry breaking and the Higgs beyond the standard model 30m I will provide a concise, coherent overview of electroweak symmetry breaking from a modern perspective, focusing on models that contain a naturally light Higgs boson. In particular, I will review theories with supersymmetry and those in which the Higgs field is a pseudo-Nambu-Goldstone boson. Speaker: Prof. Takemichi Okui (Florida State University) Search for charged and doubly-charged Higgs boson production in proton-antiproton collisions at 1.96 TeV 20m We present searches for charged Higgs production in decays of top quarks and also for pair production of doubly-charged Higgs bosons decaying to di-tau, di-muon, and muon+tau final states. The searches are performed in proton-antiproton collisions at a centre of mass energy of 1.96 TeV using an integrated luminosity of up to 7 fb-1 collected by the CDF and D0 experiments at the Fermilab Tevatron Collider. We find no evidence for charged Higgs production and set limits on the production cross-section for a variety of theoretical models. This represents the first search for pair production of doubly-charged Higgs bosons decaying into tau leptons at a hadron collider. Speaker: Louise Suter $H^\pm \rightarrow \chi^\pm \chi^0 \to 3\ell + E_{\mathrm{T}}^{\mathrm{miss}}$ Searches 20m In some supersymmetric (SUSY) models, a charged Higgs boson ($\rm H^\pm$) can decay into a chargino-neutralino ($\rm\chi^\pm_i \chi^0_j $) pair, producing a final state containing three leptons (electron/muon) and missing transverse energy ($3\ell+E_{\mathrm{T}}^{\mathrm{miss}}$). Such a decay could provide extra sensitivity to the $\rm H^\pm$, especially in the region of SUSY parameter space near tan$\beta$ = 7, where the $\rm H^\pm$ decays to Standard Model particles have reduced significance. We present a signature search on ATLAS data, setting an exclusion limit on an excess of $3\ell+E_{\mathrm{T}}^{\mathrm{miss}}$ events over the Standard Model background. Such an excess could be evidence of generic SUSY, the $\rm H^\pm \to \chi^\pm_i \chi^0_j$ decay, or both. Speaker: Mr Caleb Lampen (University of Arizona) Neutrino Physics: Chaired by Morgan Wascko 550 Conveners: Danny Marfatia, Sam Zeller (FNAL) Neutrino-Nucleus Interactions 40m A thorough understanding of the physics of neutrino-nucleus scattering continues to evade us even after 50 years of experimental work. This is mainly caused by the challenges of these experiments, which include beams with large energy uncertainty, low event rates, and large backgrounds. Progress has been made in recent years with new results from improved experiments.
It is important to continue this work as current and near-future neutrino oscillation experiments require better understanding of these neutrino-nucleus interactions. This talk will survey the current state of measurements and models and will examine future prospects for progress. Speaker: Dr Rex Tayloe (Dept of Physics, Indiana University) Highlights from MINERvA's first year 20m The MINERvA detector, operating since 2009 in the NuMI beam line at Fermilab, has collected neutrino and antineutrino scattering data on a variety of nuclear targets. The detector is designed to identify events originating in plastic scintillator, lead, carbon, iron, water, and liquid helium. The goals of the experiment are to measure precisely inclusive and exclusive cross sections for neutrino and antineutrino interactions for these targets. We present preliminary kinematic distributions for charged current quasi-elastic scattering and other processes. Speaker: Dr Aaron McGowan (University of Rochester) The MINERvA Detector: Description and Performance 20m The MINERvA experiment is aimed at precisely measuring the cross-sections for various neutrino interaction channels. It is located in Fermilab in an underground cavern in front of the MINOS near detector. MINERvA is a finely-grained scintillator with electromagnetic and hadron calorimetry regions. There are various nuclear targets located inside and in front of the detector for studying nuclear medium effects in neutrino-induced interactions. The installation was completed in March 2010 and since then the detector has been collecting data. In my talk, I will describe the structure of MINERvA detector, calibration procedures, and performance. I will also outline recent physics results related to the nuclear targets part of the detector. Speaker: Bari Osmanov (University of Florida) Charged Current Quasi-Elastic Scattering of Muon Neutrinos at the T2K Near Detector 20m T2K (Tokai-to-Kamioka) is a long-baseline neutrino oscillation experiment designed to search for electron neutrino appearance. An intense off-axis muon neutrino beam produced at the JPARC facility in Tokai is analyzed at two locations, the first a set of detectors 280 m from the production point (ND280) and the second the Super-Kamiokande detector (SK) 295 km away. The ND280 detectors can identify a variety of neutrino interaction processes including the charged current quasi-elastic (CCQE) interactions used in conjunction with the neutrino beam simulation to predict the neutrino flux and energy spectrum at SK. This "golden mode" provides precise measurements of the flux and spectrum and can also be used to measure the neutrino beam's flavor content. This talk will describe the results of the first inclusive charged current rate measurement made at ND280, the ability of the detector to identify exclusive channels, and the latest status of the CCQE measurement at ND280. Speaker: Mr Brian Kirby (University of British Columbia) Neutrino Studies with the T2K P0D Detector 20m The T2K experiment is an off-axis long baseline neutrino oscillation experiment. It utilizes the intense nu_mu beam generated at the J-PARC accelerator complex in Tokai, Japan. It has a near detector, ND280, at 280m from the proton target, and Super-Kamiokande as far detector at 295 km. The measurements of the neutral current pi0 and single charged current pi+ (as part of CC inclusive) cross-sections on water is necessary to understand the background for measurement of the theta13 mixing angle. 
However, these cross-sections are not known well in the energy region ~0.6GeV that is the peak energy of the T2K neutrino beam. This work presents the description and operations of P0D detector, a part of the ND280, and the overview of analyses being carried out with this detector. Speaker: Dmitriy Beznosko (NN Group SUNYSB) Conveners: Christina Mesropian (Rockefeller University-Unknown-Unknown), Sean Fleming (University of Arizona) Measurement of three-jet differential cross sections $\boldsymbol{d\sigma_{\text{3jet}} / dM_{\text{3jet}}}$ in $\boldsymbol{p\bar{p}}$ collisions at $\boldsymbol{\sqrt{s}=1.96}$ TeV 20m We present the first measurement of the inclusive three-jet differential cross section as a function of the invariant mass of the three jets with the largest transverse momenta in an event in $p\bar{p}$ collisions at $\sqrt{s}=1.96\, \mathrm{TeV}$. The measurement is made in different rapidity regions and for different jet transverse momentum requirements and is based on a data set corresponding to an integrated luminosity of $0.7\, \mathrm{fb}^{-1}$ collected with the D0 detector at the Fermilab Tevatron Collider. The results are used to test the three-jet matrix elements in perturbative QCD calculations at next-to-leading order in the strong coupling constant. The data allow discrimination between parametrizations of the parton distribution functions of the proton. Speaker: Lee Sawyer (College of Engineering and Science-Louisiana Technical Universi) Measurement of multi-jet cross-sections at ATLAS 20m Inclusive multi-jet production is studied using the ATLAS detector for proton-proton collisions with a center-of-mass energy of 7 TeV. The data sample corresponds to an integrated luminosity of 2.43 pb^-1, using the first proton-proton data collected by the ATLAS detector in 2010. Results on multi-jet cross sections are presented and compared to both leading-order plus parton-shower Monte Carlo predictions and next-to-leading-order QCD calculations. Speaker: Dr Matthew Cleary Tamsett (College of Engineering and Science-Louisiana Technical Universit) Second-Order Approximate Corrections for QCD Processes 20m I present generalized formulas for approximate corrections to QCD hard-scattering cross sections through second order in the perturbative expansion. The approximate results are based on recent two-loop calculations for soft and collinear emission near threshold and are illustrated by several applications to strong-interaction processes in hadron colliders. Measurements of W/Z boson production in associations with jets at D0 20m We present measurements of total and differential cross sections for the production of W or Z bosons in association with jets, including detailed study of the production of heavy flavor jets, using up to 6 fb-1 of ppbar collisions collected with the D0 detector at the Fermilab Tevatron collider. We present measurements of the inclusive W+n jets total cross sections (with n=1-4) and also differential cross sections for the n-th jet transverse momentum and rapidity. We also present measurements of the W/Z+b-jets production, two important background processes in the searches for the Higgs boson. All measurements are compared to NLO QCD calculations and to Monte Carlo simulations. 
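As a kinematic footnote to the D0 three-jet measurement described earlier in this session: M_3jet is simply the invariant mass of the vector sum of the three leading-jet four-momenta. The sketch below is illustrative only (Python; the jet values are invented and the helper names are not taken from any experiment's software) and shows the computation from (pT, eta, phi, m):

import math

def four_vector(pt, eta, phi, m):
    # Convert (pT, eta, phi, mass) to (E, px, py, pz).
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = math.sqrt(px**2 + py**2 + pz**2 + m**2)
    return e, px, py, pz

def invariant_mass(jets):
    # Invariant mass of the summed four-momenta of the given jets.
    e, px, py, pz = (sum(c) for c in zip(*(four_vector(*j) for j in jets)))
    return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

# Three illustrative leading jets: (pT [GeV], eta, phi, m [GeV]).
leading_jets = [(180.0, 0.3, 0.1, 12.0), (150.0, -0.8, 2.9, 10.0), (95.0, 1.2, -2.0, 8.0)]
print("M_3jet ~ %.1f GeV" % invariant_mass(leading_jets))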
Top Quark Physics: Chaired by Liang Li 553 Measurement of t-channel single top quark production at 7 TeV 20m We present a measurement of the inclusive single top production cross section in proton-proton collisions at the LHC at a centre-of-mass energy of 7 TeV, using data collected with the CMS experiment during the year 2011. The analysis considers decay channels where the W from the top decays into electron-neutrino or muon-neutrino, and makes use of kinematic characteristics of electroweak single top production for the separation of signal from backgrounds using multivariate methods. The result, which supersedes an earlier measurement based on 2010 data, is compared with the most precise standard model theory predictions. In addition, we present measurements of various differential single top quark production cross sections. Speaker: Dr Thomas Speer (Department of Physics-Brown University) Single top production in ppbar collisions at sqrt(s)=1.96 TeV with the D0 detector 20m We present measurements of the single top production cross section in ppbar collisions at sqrt(s)=1.96 TeV using data collected with the D0 detector and corresponding to up to 5.4 fb-1 of integrated luminosity. We obtain measurements of the inclusive production cross section as well as measurements for the separate s- and t-channel production processes. These measurements are used to set constraints on the |Vtb| element of the CKM matrix. We also investigate possible CP violation in the production of single top quarks and the existence of possible resonances in the s-channel. Speaker: Reinhard Schwienhorst (Michigan State University) Measurement of Top Quark Properties in CDF 20m The top quark was discovered at the CDF and D0 experiments in 1995. Its properties within the Standard Model are fully defined. The measurement of the top quark mass and the verification of the expected properties have been an important topic of experimental top quark physics ever since. We will present the recent measurements of top quark properties in CDF. Speaker: Zhenbin Wu (Baylor University) Measurement of Single-top Quark Production with the ATLAS Detector 20m We use 2011 data from the ATLAS detector to isolate the production of single-top quarks. This electroweak top-quark production is expected to be sensitive to new physics such as flavor changing neutral currents or W' production, and can also be an important background for processes like Higgs boson production. The data for this analysis are collected from collisions occurring at 7 TeV center-of-mass energy and then several selections are applied to these events. The selections are determined from studies of simulated events and chosen to isolate the signal while removing background events, based on the kinematic signature of the single-top quark process. We report the likelihood that the resulting sample of data events are single-top quarks and discuss the kinematics of this process. Speaker: Jenny Holzbauer (Michigan State University) Search for flavor changing neutral currents in decays of top quarks 20m We present a search for flavor changing neutral currents in decays of top quarks. The analysis is based on a search for $t\bar{t}\rightarrow\ell'\nu\ell\bar{\ell}$+jets ($\ell, \ell' = e,\mu$) final states using 4.1~{\rm fb}$^{-1}$ of integrated luminosity of $p\bar{p}$ collisions at $\sqrt{s} = 1.96$~{\rm TeV}. We extract limits on the branching ratio $B(t\rightarrow Zq)$ ($q = u, c$ quarks), assuming anomalous $tuZ$ or $tcZ$ couplings.
We do not observe any sign of such anomalous coupling and set a limit of $B < 3.2\%$ at 95\% C.L. Speaker: Carrie McGivern (University of Kansas) DPF_2011_Talk_Final.key DPF_2011_Talk_Final.pdf DPF_2011_Talk_Final.ppt High Gradient RF Progress : Toward Tev-scale Accelerators 30m Research on the basic physics of high-gradient, high frequency accelerator structures and the associated RF/microwave technology are essential for the future of discovery science, medicine and biology, energy and environment, and national security. We will review the state-of-the-art for the development of high gradient linear accelerators. Speaker: Prof. Sami Tantawi (SLAC) Towards 20 T accelerator magnets: a road to super-high energy colliding beams 30m For a fixed size of a circular collider, its energy is limited by the strength of bending dipole magnets. Moreover, for both linear and circular machines, their maximum luminosity is determined (among other factors) by the strength of quadrupole magnets used for the final beam focusing. That is why there has been a permanent interest to higher-field and higher-field gradient accelerator magnets from the high-energy physics and particle accelerator community. The highest fields in accelerator magnets have been achieved using superconducting electromagnets. The ultimate field of these magnets is limited by the superconductor critical parameters such as critical field Bc2, critical temperature Tc and critical current density Jc. There are two classes of practical superconducting materials suitable for accelerator magnets - so called Low-Temperature Superconductors (NbTi, Nb3Sn, Nb3Al) and High-Temperature Superconductors (BSCCO, YBCO). The maximum field of NbTi accelerator magnets used in all present high-energy machines including LHC is limited by ~10 T at operating temperature ~1.8 K. The magnetic fields above 10 T threshold became possible thanks to the Nb3Sn superconductor. Nb3Sn accelerator magnets can provide operating fields up to ~15 T and significantly increase the coil temperature margin. Accelerator magnets with operating field above 15 T would require using high-field high-temperature superconductors, which have highest upper critical magnetic field Bc2. However, due to the substantially higher cost and lower critical current density in magnetic fields below 15 T, a hybrid approach with Nb3Sn superconductor in fields below 15 T is a quite attractive option even though the Nb3Sn and HTS materials require different coil processing. This paper discusses the status and main results of the state-of-the-art Nb3Sn accelerator magnets and outlines a roadmap towards the 20 T class magnets. Speaker: Dr Alexander Zlobin (Fermilab) SRF Technology for Particle Accelerators: Progress Report 30m The superconducting RF (SRF) technology is increasingly becoming the technology of choice for a wide range of particle accelerators. It has found applications in high energy and nuclear physics accelerators, spallation neutron sources, and light sources. The opportunities offered by the SRF technology, and its challenges, will be presented and reviewed. Speaker: Prof. Jean Delayen (Old Dominion University) High Power, High Energy Cyclotrons for Decay-At-Rest Neutrino Sources: The DAEdALUS Project 30m Neutrino physics from muon decay is very much at the forefront of today's physics research. 
Large detectors installed in deep underground locations perform neutrino mass, CP violation, and oscillation studies using long- and short-baseline beams of neutrinos from muons decaying in flight. DAEdALUS looks at neutrinos from stopped muons, "Decay At Rest (DAR)" neutrinos. The DAR neutrino spectrum has no electron antineutrinos (nu-e-bar) (pi-minus are absorbed to level of 10^-4), so a detector with much hydrogen (water-Cherenkov or liquid scintillator) is sensitive to appearance of nu-e-bar's oscillating from nu-mu-bar via inverse-beta-decay. Oscillations are studied using shorter baselines, less than 20 km reaching the same L/E range as the current and planned neutrino experiments originating at Fermilab. As the neutrino flux is not variable, nor is the energy, the baseline is varied: plans call for 3 accelerator-based neutrino sources at 1.5, 8 and 20 km with staggered beam-on cycles. Key is cost-effectively generating megawatt beams of 800 MeV protons. A superconducting ring cyclotron, accelerating H2+ ions is being designed by L. Calabretta and his group at INFN-LNS-Catania. Having a design peak power of 8 MW, the 5 emA circulating beam is extracted via a stripping foil, avoiding beam-loss problems that would be encountered in classical cyclotron extraction. The molecular hydrogen beam also reduces the severity of space charge effects at the low-energy central region for the injected beam. The system consists of two cascaded cyclotrons, and an axial injection line from an external microwave source. >20 emA of H2+ ions, CW, are seen with an available source. The injector cyclotron will bring beam to 50 MeV/a, and a short transfer line will take beam to the main ~15 meter diameter Ring cyclotron. This will consist of 8 sectors of superconducting magnets, with maximum field of 6 T. Isochronicity is maintained by field design and suitable trim coils. RF cavities between the magnet sectors accelerate the beam. An extraction channel of the lower-rigidity protons exiting the stripper foil is plotted through the highly-variable magnetic field, and exits cleanly from the machine. A large water-cooled graphite target provides the source of neutrinos from pi-mu decays. For DAEdALUS applications, each of the three machines will be run at ~20% duty factor, so events in the detector can be tagged unambiguously with a source. Timing is arbitrary, beam on time for each machine can range from seconds to days. Average power from each source still exceeds 1 MW, providing adequate neutrino flux at the detector for very fine sensitivity to the measurements desired. It should be noted that the original, and even revolutionary design of these accelerators can facilitate many other "ADS" (Accelerator-Driven Systems) applications, such as driving subcritical reactors, waste transmutation, etc. Speaker: Jose Alonso (MIT-LNS) Search for new physics with same-sign isolated dilepton events with jets and missing transverse energy at CMS 20m The results of searches for Supersymmetry in events with two same-sign isolated leptons, hadronic jets, and missing transverse energy in the final state are presented. The searches use pp collisions at 7 TeV collected in 2011 by the CMS experiment. 
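The same-sign dilepton signature in the CMS abstract above boils down to a simple event filter. The sketch below is purely illustrative (Python; the event-record field names, isolation definition, and thresholds are invented and are not the CMS selection) and only shows the logic of requiring two isolated like-sign leptons plus jets and missing transverse energy:

def passes_same_sign_selection(event, min_lep_pt=20.0, min_jets=2, min_met=50.0):
    # Illustrative same-sign dilepton + jets + MET filter. `event` is assumed to be a
    # dict with 'leptons' (each a dict with pt, charge, iso), 'jets' (a list of jet pT
    # values), and 'met'; these field names are placeholders, not any experiment's format.
    leptons = [l for l in event["leptons"]
               if l["pt"] > min_lep_pt and l["iso"] < 0.15]      # isolated, energetic leptons
    same_sign_pair = any(l1["charge"] == l2["charge"]
                         for i, l1 in enumerate(leptons)
                         for l2 in leptons[i + 1:])              # at least one like-sign pair
    enough_jets = sum(pt > 30.0 for pt in event["jets"]) >= min_jets
    return same_sign_pair and enough_jets and event["met"] > min_met

example = {"leptons": [{"pt": 35.0, "charge": +1, "iso": 0.05},
                       {"pt": 28.0, "charge": +1, "iso": 0.10}],
           "jets": [120.0, 45.0, 33.0], "met": 80.0}
print(passes_same_sign_selection(example))  # True for this toy event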
Speaker: Marc Gabriel Weinberg (Department of Physics-University of Wisconsin) Search for Chargino-Neutralino Associated Production in Dilepton Final States with Tau Leptons 20m We present a search for chargino and neutralino supersymmetric particles yielding same signed dilepton final states including one hadronically decaying tau lepton using 6.0 fb^-1 of data collected by the the CDF II detector. This signature is important in SUSY models where, at high \tan{\beta}, the branching ratio of charginos and neutralinos to tau leptons becomes dominant. We study event acceptance, lepton identification cuts, and efficiencies. We set limits on the production cross section as a function of SUSY particle mass for certain generic models. Speaker: Mr Robert Forrest (UC Davis) Inclusive Search for Same-Sign Dileptons 20m We present an inclusive search for events with two isolated leptons ($e$ or $\mu$) of the same electric charge in $pp$ collisions at $\sqrt{s}=7$ TeV. The data are selected from events recorded in the ATLAS detector in 2011. With a small Standard Model background, the same sign dilepton signature is a powerful testing ground for new physics. The distributions of kinematical variables are presented, and compared to the Standard Model predictions. Speaker: Mr Benjamin Cerio (Physics Department-Duke University) Search for universal extra dimensions and supersymmetry in like-sign dimuon events using 7.3 fb-1 of D0 data 20m We present a search for universal extra dimensions (UED) and supersymmetry (SUSY) in the two like-sign muons final state. The data set corresponds to an integrated luminosity of 7.3 fb-1 collected by the D0 detector at a $p\bar{p}$ center of mass energy of 1.96 TeV at the Fermilab Tevatron Collider. No evidence for physics beyond the standard model is observed and limits are set on the size of the compactification scale R_c^−1 in the minimal UED model and on the SUSY parameter space in supergravity inspired models. Speaker: Jason Mansour (University of Göttingen) Search for Supersymmetry at CMS in events with three or more leptons 20m A search for physics beyond the Standard Model (SM) is performed using events with at least three leptons and any number of jets. The search is performed in data collected in 2011 by the CMS experiment at the LHC in pp-collisions at a center of mass energy of 7 TeV. Numerous leptonic channels have been investigated in an exclusive manner and data-driven techniques are used to quantify the SM backgrounds. The results are used to constrain hitherto unexplored regions of supersymmetry that have significant multilepton yield at 7 TeV pp-collisions. Speaker: Sho Maruyama (Department of Physics-University of California Davis (UCD)) Search for multilepton final states of supersymmetry at the ATLAS Detector 20m The results of a search for supersymmetry in multilepton final states using the ATLAS detector is presented. Such signals require three or more leptons, jets, and missing transverse energy. This channel provides the advantage that the contribution due to standard model backgrounds is expected to be very low. Results from the 2011 data-taking will be reported. 
Speaker: Jeremiah Jet Goodson (Department of Physics - State University of New York (SUNY)) Missing-ET insensitive search for new physics such as R-parity Violation with multileptons 20m Anticipating a data sample of the order of hundreds of $pb^{-1}$ at a collision energy of 7 TeV by the CMS experiment at LHC in 2011, we probe new physics such as matter symmetry violation in the leptonic sector in theories with partner particles with a signature of three or more leptons in the final state. The search is organized to minimize reliance on specific kinematic variables to reduce SM backgrounds and we illustrate it by application to R-parity violating scenarios of new physics which are not necessarily accompanied by missing ET. We also estimate Standard Model backgrounds for individual channels with a maximal use of data-based methods to avoid reliance on simulation. Speaker: Sanjay Ravi Ratan Arora (Dept. of Physics and Astronomy-Rutgers, State Univ. of New Jers) Lattice weak matrix elements and CP violation in the LHC era 24m The role of lattice matrix elements in refined tests of the Standard Model and in searches for new physics will be discussed. Although the results from the lattice in conjunction with data from B-factories provided a confirmation of the CKM paradigm of CP violation a few years ago, since then improved calculations and better data from experiments are now yielding strong indications that the single phase in the CKM matrix is not enough. Repercussions for some BSMs will also be discussed. Speaker: Amarjit Soni (BNL) Bs -> J/psi phi at CMS 24m The Bs meson is studied with the CMS detector at the LHC. The time-dependent measurements of the Bs decay into J/Psi Phi and of the rate of the rare decay into mu mu potentially provide indirect constraints on physics beyond the Standard Model. This talk presents the first cross section measurement for Bs->J/Psi phi production based on data taken in 2010 and discusses prospects of the lifetime-difference and CP measurements with CMS. Speaker: Giordano Cerizza (Department of Physics and Astronomy-College of Arts and Science) Search for CP violation in the Bs - Bsbar system with LHCb 24m The determination of the mixing induced CP-violating asymmetry in decays such as $B^0_s \to J/\psi \phi$ is one of the key goals of the LHCb experiment. Its value is predicted to be very small in the Standard Model but can be significantly enhanced in many models of New Physics. The steps towards a precise determination of this phase with a flavour-tagged, time-dependent, angular analysis of the decay $B^0_s \to J/\psi \phi$ will be presented, and first results shown from this measurement programme, using data collected in 2010 and the early months of the 2011 run. Results will also be shown, and prospects discussed, from related measurements. Speaker: Daan Van Eijk (NIKHEF) Measurement of CP violating parameters in the decay Bs -> J/psi phi 24m We report a new measurement of the $CP$-violating phase $\phi_s$, of the decay width difference for the two mass eigenstates $\Delta \Gamma_s$, of the mean $B^0_s$ lifetime $\overline{\tau}_s$, and of magnitudes of the decay amplitudes, from the flavor-tagged decay $B^0_s\to J/\psi \phi$. For the first time, we consider possible contributions from the decay $B^0_s \rightarrow J/\psi K^+K^-$, with the $K^+K^-$ in an $s$ wave. This measurement is based on 8 fb$^{-1}$ of $p\overline{p}$ collisions recorded with the D0 detector at a center-of-mass energy $\sqrt{s} = 1.96$ TeV at the Fermilab Tevatron collider.
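For orientation on the quantities quoted in the Bs -> J/psi phi abstracts above (a hedged reminder of standard phenomenology definitions; sign and normalization conventions differ between collaborations), the mixing-induced CP phase is expected to be small in the Standard Model,
$$ \phi_s^{\rm SM} \simeq -2\beta_s, \qquad \beta_s \equiv \arg\!\left(-\frac{V_{ts} V_{tb}^{*}}{V_{cs} V_{cb}^{*}}\right) \approx 0.02~{\rm rad}, $$
while $\Delta\Gamma_s \equiv \Gamma_L - \Gamma_H$ denotes the width difference between the light and heavy $B_s$ mass eigenstates, predicted to be of order 0.1 ps$^{-1}$; a sizable deviation of $\phi_s$ from this small value would signal new physics in $B_s$ mixing.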
Speaker: Dr Avdhesh Chandra (Rice University) Long-Distance Dominance of the CP Asymmetry in B->X_{s,d}+gamma Decays 24m The CP asymmetry in inclusive b-> s gamma decays is an important probe of new physics. The theoretical prediction was thought to be of a perturbative origin. In the standard model the perturbative prediction for the asymmetry is about 0.5 percent. In a recent work with M. Benzke, S.J. Lee and M. Neubert , we have shown that the asymmetry is in fact dominated by non-perturbative effects. Since these are hard to estimate, it reduces the sensitivity to new physics effects. On the other hand, these new non-perturbative effects suggest a new test of new physics by looking at the difference of the CP asymmetries in charged versus neutral B-meson decays. Speaker: Dr gil paz (The University of Chicago) Detector Technology and R&D: Chaired by Andy White 556 Status of particle flow calorimetry 30m Summary of challenges, development, and prospects. Speaker: Dr Jose Repond (Argonne National Laboratory) Performance of the CMS Electromagnetic Calorimeter at the LHC 15m The CMS Electromagnetic Calorimeter (ECAL) is a high resolution, fine grained calorimeter devised to measure photons and electrons at the LHC. Built of lead tungstate crystals, it plays a crucial role in the search for new physics as well as in precision measurements of the Standard Model. A pre-shower detector composed of sandwiches of lead and silicon strips improves pi-0/gamma separation in the forward region. The operation and performance of the ECAL during the 2010 run at the LHC, with pp collisions at √s = 7 TeV will be reviewed, and to some extent for the 2011 running as well. Pure samples of electrons and photons from decays of known resonances have been exploited to improve and verify the trigger efficiency, the reconstruction algorithms, the detector calibration and stability, and the particle identification efficiency. A review of these aspects will be given. Speaker: Marco Grassi (Universita di Roma I 'La Sapienza'-Universita e INFN, Roma I) Performance of Particle Identification with the ATLAS Transition Radiation Tracker > 15m The ATLAS Transition Radiation Tracker (TRT) is the outermost of the three sub-systems of the ATLAS Inner Detector at the Large Hadron Collider at CERN. In addition to its tracking capabilities, the TRT provides particle identification (PID) ability through the detection of transition radiation X-ray photons. The latter functionality provides substantial discriminating power between electrons and hadrons in the momentum range from 1 to 200 GeV. In addition, the measurement of an enhancement of signal time length, which is related to high specific energy deposition (dE/dx), can be used to identify highly ionizing particles, increasing the electron identification capabilities at low momentum and improving the sensitivity of searches for new physics. This talk presents the commissioning of TRT PID during early 2010 7 TeV data taking. Performance in 2010 and 2011 demonstrating the TRT's ability to identify electrons, complementary to calorimeter based identification methods, will also be shown. Speaker: Elizabeth Hines (Department of Physics and Astronomy-University of Pennsylvania) Calibration and Performance of the ATLAS Muon Spectrometer 15m The ATLAS muon spectrometer is designed to measure muon momenta with a resolution of 4% @ 100 GeV/c rising to 10% @ 1 TeV/c track momentum. 
The spectrometer consists of precision tracking and trigger chambers embedded in a 2T magnetic field generated by three large air-core superconducting toroids. The precision detectors provide 50 micron tracking resolution to a pseudo-rapidity of 2.7. The system also includes an optical monitoring system which measures detector positions with 40 micron precision. I will report on the calibration and performance of the ATLAS muon spectrometer in the first year of LHC data. Speaker: Dr Edward Diehl (University of Michigan) Frequency Scanned Interferometry for ILC Tracker Alignment 15m In order to exploit fully the physics potential of future lepton colliders, highly precise tracking systems will be needed, for which systematic alignment uncertainties must be small. We describe ongoing R&D in frequency scanned interferometry (FSI) to be applied to alignment monitoring of a detector's charged particle tracking system, in addition to its beam pipe and final-focus quadrupole magnets. In FSI alignment, one measures hundreds of absolute point-to-point distances of detector elements in 3 dimensions by using an array of beams split from a central laser. We report here on progress using a dual-laser FSI single-channel prototype. Dual lasers with oppositely scanned frequency directions permit cancellation of many systematic errors, making the alignment robust against vibrations and environmental disturbances. Under realistic environmental conditions, a precision of about 0.2 microns was achieved for a distance of about 40 cm for the prototype. Work is now under way to demonstrate a multi-channel system on the bench. Recent progress will be summarized. Speaker: Dr Haijun Yang (University of Michigan) Conveners: Michael Barnett (Lawrence Berkeley National Lab), Prof. Snow Gregory (University of Nebraska in Lincoln) Particle Physics Masterclasses 12m The IPPOG and U.S. particle physics masterclasses took place worldwide in March, 2011. For the first time, all masterclasses used real LHC data. Students in the U.S. masterclasses (that included participants in several countries outside the U.S.) analyzed both ATLAS and CMS data. QuarkNet has been evaluating the U.S. effort since 2008. The design of the LHC masterclasses and the results of this study will be discussed. Speaker: Kenneth William Cecire (University of Notre Dame) CMS Data for High School Teachers and Students 13m The CMS Collaboration has released more than a quarter of a million 7 TeV proton-proton events that contain pairs of muons, electrons or jets with 2-body invariant masses in the range 0 to 100 GeV for student and teacher investigations. QuarkNet and I2U2 have developed software to exploit these data in a manner similar to that of the front-line physicists: a 3-D web-based event display to understand particle interactions in the detector and a histogramming package to create and examine mass plots after making cuts on the data. An additional package allows the students to produce on-line posters featuring their results. The data include large numbers of events corresponding to the presence of J/psi, W and Z particles, allowing students to make "discoveries". We describe the released data and three initiatives that allow exploration: a student masterclass, a more in-depth web-based "e-Lab" and a summer 2011 teacher workshop to be held at Fermilab. 
Speaker: Mike Fetsko (Goodwin High School, Virginia) Classroom Cosmic Rays: Detectors and Analysis 15m Since 1998, QuarkNet has provided over 500 cosmic ray muon detectors to teachers in the project and collaborators on similar projects. The detector relies on GPS accuracy for time-stamping PMT pulses from four scintillation counters. The DAQ is now in its third revision. Students and teachers use the detector to carry out small experiments to measure properties of cosmic ray muons. Other groups, mostly international, have purchased the detector for similar uses. We will describe the detector, its installation base and our web-based data-sharing and analysis portal. Speaker: Thomas Jordan (University of Florida/Fermilab) Bringing the LHC and ATLAS to a planetarium 15m An outreach effort has started at Michigan State University to bring the physics of the LHC and the ATLAS detector to the Abrams planetarium on the MSU campus. MSU graduate and undergraduate students from Physics as well as from the College of Communication Arts & Sciences are putting together planetarium content on the LHC and its connection to astronomy, the big bang, and dark matter. I will report on this effort and present a first short clip. Speaker: Prof. Reinhard Schwienhorst (Michigan State University) Celebrating 30 Years of K-12 Educational Programming at Fermilab 25m In 1980 Leon Lederman started Saturday Morning Physics with a handful of volunteer physicists, around 300 students and all the physics teachers who tagged along. Today Fermilab offers over 30 programs annually with help from 250 staff volunteers and 50 educators, and serves around 40,000 students and 2,500 teachers. Find out why we bother. Over the years we have learned to take advantage of opportunities and confront challenges to offer effective programs for teachers and students alike. We offer research experiences for secondary school teachers and high school students. We collaborate with educators to design and run programs that meet their needs and interests. Popular school programs include classroom presentations, experience-based field trips, and high school tours. Through our work in QuarkNet and I2U2, we make real particle physics data available to high school students in data-driven activities as well as masterclasses and e-Labs. Our professional development activities include a Teacher Resource Center and workshops where teachers participate in authentic learning experiences as their students would. We offer informal classes for kids and host events where children and adults enjoy the world of science. Our website hosts a wealth of online resources. Funded by the U.S. Department of Energy, the National Science Foundation and Fermilab Friends for Science Education, our programs reach out across Illinois, throughout the United States and even around the world. We will review the program portfolio and share comments from the volunteers and participants. Speaker: Marjorie Bardeen (Fermilab) Round Table Discussion 40m Speaker: Presenters and participants All Heavy Flavor Physics: EW penguins and Charm 554 BABAR Results on Leptonic and Radiative B and Charm Decays 18m We present measurements of B and charm flavor changing neutral current processes, including inclusive and exclusive b->s gamma and B -> K nu nubar. We also present results of BABAR studies of leptonic decays of charged B and Ds mesons, in particular B -> tau nu and Ds->tau/mu nu.
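As background to the leptonic-decay measurements listed above (an illustrative standard-model relation added for reference, not part of the original abstract): the purely leptonic branching fraction is $\mathcal{B}(B^+\to\tau^+\nu_\tau) = \frac{G_F^2 m_B m_\tau^2}{8\pi}\left(1-\frac{m_\tau^2}{m_B^2}\right)^2 f_B^2\,|V_{ub}|^2\,\tau_B$, so these modes probe the product $f_B|V_{ub}|$ and are sensitive to possible charged-Higgs contributions.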
Speaker: Brad Wray (University of Texas at Austin) Updated Search for Non-SM Physics in B->K(*)mumu Decays at CDF 18m We present updated measurements of branching fractions, polarization, and muon forward-backward asymmetry in B-->K mu mu final states using 6.7/fb of data collected by the CDF detector. A search for Lambda_b --> Lambda mu mu decays will also be shown. The results are the most sensitive from a single experiment to date. Speaker: Prof. Austin Napier (Tufts University) Electroweak Penguin decays at LHCb 18m Promising ways to search for New Physics effects in radiative penguin decays are in the angular analysis of $B^0 \to K^{*} \mu^+\mu^-$, in the measurement of direct CP violation in $B^0 \to K^{*} \gamma$ and a time dependent analysis of $B^0_s \to \phi \gamma$. All of these studies are being pursued at LHCb. First results will be shown from the 2010 and early 2011 data, with particular emphasis on $B^0 \to K^{*} \mu^+ \mu^-$. Speaker: Thomas Blake (Imperial College Sci., Tech. & Med.) Semileptonic B and Charm Decays with BABAR 18m We present recent results of studies of semileptonic B and charm decays from BABAR. In particular, we describe a recent measurement of the B-> D(*)tau nu branching fraction, and a study of Bs production and semileptonic decays using BABAR data collected above the Upsilon(4S). We also discuss the determination of |Vub| from exclusive B->pi/rho l nu and from fully inclusive measurements and present recent branching fraction measurements of B->Lambda_c p X l nu, B -> Ds K l nu and D+->K-pi+ e nu. Speaker: Brian Hamilton (University Of Maryland) Measurements of Charm Mixing and CP Violation at Belle 18m We report an improved measurement of D^0-D^0bar mixing using the time dependence of the Dalitz plot of the decay mode D^0->K_S^0 pi^+ pi^-. In addition, we report results of searches for CP violation in the decays D^0->K_S^0 P^0, where P^0 denotes a neutral pseudo-scalar meson which is either a pi^0, eta or eta^'. The result for D^0->K_S pi^0 is the most sensitive search for CP violation in the D^0 system to date. These results are based on a large data sample collected by the Belle detector at the KEKB asymmetric energy electron positron collider. Speaker: Debabrata Mohapatra (Virginia Tech) Results and prospects for Charm Physics at LHCb 18m Precision measurements in charm physics offer a window into a unique sector of potential New Physics interactions. LHCb is well equipped to take advantage of the enormous production cross-section of charm mesons in $pp$ collisions at $\sqrt{s}=7$~TeV. The measurement of the $D^0 -\bar{D^0}$ mixing parameters and the search for CP-violation in the charm sector are key physics goals of the LHCb programme. Results will be shown, based on the data collected in 2010, and the first few months of the 2011 run. Speaker: Silvia Borghi (University of Glasgow) Jets and Jet-like Correlations at RHIC 30m I will present an overview of recent results on jets and jet-like correlation measurements from the Relativistic Heavy-Ion Collider (RHIC) at Brookhaven National Laboratory. Jets are produced in the initial hard scatterings of an event and can therefore be exploited as probes of the hot and dense medium produced in heavy-ion collisions. Previous RHIC results indicate that this medium, the Quark Gluon Plasma (QGP), is strongly coupled, with partonic degrees of freedom.
High pT colored partons passing through the sQGP are therefore believed to suffer energy loss via induced gluon radiation and elastic collisions, before exiting the medium and fragmenting in vacuum. Jet reconstruction and high pT correlation studies allow us to investigate how the partons interact with the medium and how the medium responds to the partons moving through it. By comparing measurements from p-p and d-Au to those in Au-Au collisions at sqrt(s_NN) = 200 GeV we aim to disentangle cold nuclear matter effects from those of the hot and dense sQGP. Speaker: Helen Louise Caines (Physics Department-Yale University) Results from Pb+Pb Collisions with the ATLAS Detector at the LHC 30m A broad program of measurements using heavy ion collisions is underway in ATLAS, with the aim of studying the properties of QCD matter at high temperatures and densities. This talk describes measurements performed using up to 9 µb-1 of lead-lead collision data provided at a nucleon-nucleon center-of-mass energy of 2.76 TeV by the Large Hadron Collider and collected by the ATLAS Detector during November and December 2010. We will be presenting results on inclusive charged particle multiplicities and elliptic flow to study the global features of the collisions as a function of centrality, pseudorapidity and transverse energy. Higher order Fourier coefficients will also be shown to assess the importance of more complicated event-wise geometric fluctuations. The study of the microscopic properties of the system will be addressed with high pT probes. Muon measurements provide access to W and Z bosons which are potentially sensitive to modifications of the nuclear PDFs, as well as heavy flavor. Charged particle spectra, particularly at high pT, are sensitive to the overall suppression of jets and their modified fragmentation. Finally, jet rates, asymmetries and fragmentation properties offer a more direct look at the physics of jet quenching than has been available at previous facilities. Speaker: Dr Peter Alan Steinberg (Brookhaven National Laboratory (BNL)) Studies of Jet Quenching in PbPb Collisions at CMS 30m Jets are an important tool to probe the hot, dense medium which is produced in ultra-relativistic heavy ion collisions. Copious production of hard processes, well above the heavy ion background, occurs at the Large Hadron Collider due to the large increase in collision energy. The multipurpose Compact Muon Solenoid (CMS) detector is well designed to measure the hard scattering processes with its high quality calorimeters and high precision silicon tracker. Jet quenching has been studied in CMS in PbPb collisions at $\sqrt{s_{NN}}=$\,2.76~TeV. As a function of centrality, dijet events with a high pT leading jet were found to have an increasing momentum imbalance that was significantly larger than those predicted by simulations. The angular distribution of jet fragmentation products has been explored by associating charged tracks with the dijets observed in the calorimeters. The calorimeter-based momentum imbalance is reflected in the associated track distributions, which show a softening and widening of the subleading jet fragmentation pattern. Studies of the missing transverse momentum projected on the jet axis have shown that the overall momentum balance can be recovered if tracks at low pT are included. In the PbPb data, but not in the simulations, a large fraction of the balancing momentum is carried by soft particles radiated at large angle relative to the jets.
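For reference on the dijet momentum imbalance discussed in the jet-quenching abstracts above (a standard definition added for clarity, not part of the original abstracts): the dijet asymmetry is usually defined as $A_J = \frac{p_{T,1}-p_{T,2}}{p_{T,1}+p_{T,2}}$, where $p_{T,1}$ and $p_{T,2}$ are the transverse momenta of the leading and subleading jets; quenching of the subleading jet shifts the $A_J$ distribution toward larger values.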
Speaker: george Stephans (Laboratory for Nuclear Science (LNS)-Massachusetts Inst. of Tec) Parton showers as sources of energy-momentum deposition in the QGP and implications for jet observables 25m I present results on the derivation of the distribution of energy and momentum transmitted from a primary fast parton and its medium-induced bremsstrahlung gluons to a thermalized quark-gluon plasma. The calculation takes into account the important and thus far neglected effects of quantum interference between the resulting color currents. From the result I obtain the rate at which energy is absorbed by the medium as a function of time and find that the rate is modified by the quantum interference between the primary parton and secondary gluons. This Landau-Pomeranchuk-Migdal type interference persists for time scales relevant to heavy ion phenomenology. The newly derived source of energy and momentum deposition is coupled to linearized hydrodynamics to obtain the bulk medium response to realistic parton propagation and splitting in the quark-gluon plasma. Implications for jet observables are discussed. Speaker: Dr Richard Neufeld (LANL) The Wake of a Quark Moving Through Hot QCD Plasma vs. N=4 SYM Plasma 20m We present the energy density and flux distribution of a quark moving through the high temperature QCD plasma and compare it with that in strongly coupled N=4 SYM plasma. The Boltzmann equation is reformulated as a Fokker-Planck equation at leading log approximation and is solved numerically with non-trivial boundary conditions in momentum space. We use the kinetic theory and take the Fourier transform to calculate the energy and momentum density in real space. The angular distribution exhibits the transition to the ideal hydrodynamics and is analyzed with the first and second order hydrodynamic source. The AdS/CFT correspondence allows the same calculation in strong coupling regime. Compared to the kinetic theory, the energy-momentum tensor is better described by hydrodynamics even after accounting for the differences in the shear viscosities. We argue that the difference between Boltzmann and AdS/CFT comes from the second order hydrodynamic coefficient tau_pi, which is generically large compared to the shear length in a theory based on the Boltzmann equation. Speaker: JUHEE HONG (Stony Brook University) A Search For The Higgs Boson In H -->Gamma Gamma Mode 19m We report on a search for SM Higgs Boson in the mode H --> gamma gamma conducted by the CMS experiment with the data accumulated during the 2010 & 2011 running of the LHC at sqrt(s) = 7 TeV. Speaker: Christopher Allan Palmer (Department of Physics-Univ. of California San Diego (UCSD)) A Search For The Higgs Boson In H --> WW 19m We report on a search for the Higgs boson in the decay mode H --> WW based on data collected by the CMS experiment during the 2010+2011 running of the LHC. Speaker: Kevin Kai Hong Sung (Massachusetts Inst. of Technology (MIT)) Search For The Standard Model Higgs Boson In The WH->lnubb And H->WW->lnulnu Decay Modes 19m Results for a Higgs boson search by the ATLAS experiment in the WH->lnubb and H->WW->lnulnu decay modes using a multivariate approach are presented. The results are based on data taken in 2011 at 7 TeV center-of-mass energy. No evidence is found for a Standard Model-like Higgs boson in either decay mode. Exclusion limits in terms of the ratio to expected SM rate are reported in Higgs mass ranges of 115-130 GeV and 120-600 GeV respectively in the two modes. 
Higgs Transverse Momentum Distributions at the LHC 19m In this talk I will present a factorization theorem for the Higgs transverse momentum spectrum using SCET (Soft Collinear Effective Theory). This theorem allows us to systematically resum large logarithms of $p_T$ in the regime $m_h \gg p_T \gg \Lambda_{QCD}$. The transverse momentum distributions of Higgs produced via gluon fusion will be presented and compared to previous results derived using effective field theory, as well as the results of Collins, Soper, and Sterman (CSS). The differences will be illuminated. I will also present new results for the Higgs $p_T$ spectrum for b-quark fusion. Updated Search for Standard Model Higgs to WW Production Using up to 8.2 fb-1 at the Tevatron 19m We report on the search for Standard Model (SM) Higgs boson to WW production in the final state of two charged leptons (e,mu) and two neutrinos from the collision of p-pbar pairs at sqrt(s) = 1.96 TeV. The data correspond to 8.2 fb-1 collected by the CDF II detector and 8.2 fb-1 by the DZero detector at the Tevatron collider at Fermilab. The CDF version of the analysis implemented several improvements over the previous versions reported in the spring. In the CDF update, track and calorimeter isolation quantities for the leptons were recalculated to prevent mutual spoilage when two candidates are in close proximity to each other. Additionally, CDF has introduced a likelihood based category for forward electrons to recover candidates failing the original and still present cut based category. To maximize signal acceptance, events with same-sign dileptons and trileptons are included as separate regions to account for associated Higgs production with a Z or W boson and via vector boson fusion. Additionally, in the CDF analysis events with low dilepton invariant mass are included in a separate region to further improve acceptance. We then set confidence level limits at nineteen Higgs masses between 110 GeV and 200 GeV. Speaker: Benjamin Carls Searches for Diboson Production in the Lepton + MET + Jets Final State in ATLAS 19m The study of diboson production at high energy colliders tests the electroweak sector of the standard model (SM) and provides a sensitive probe of new physics beyond the SM. An important example is the production of a Higgs boson with mass greater than 140 GeV/c^2 which decays primarily to W boson pairs. The diboson decay channel where one W boson decays to leptons and the other vector boson decays to quarks leading to high energy jets is particularly interesting due to its large branching fraction as compared to all-leptonic channels but is also challenging due to large backgrounds, particularly from W+jets. We present searches for diboson production in the lepton + MET + jets final state using $\sqrt{s}=7$ TeV collision data collected by the ATLAS detector during the 2011 run. Particular emphasis is placed on searches for (1) the SM Higgs boson with mass above the W pair production threshold and (2) SM WW+WZ production. A Search For An Exotic Higgs In The Decay Mode H++ --> l+l+ 19m We report on a search for a doubly charged Higgs H++ --> l+l+ conducted by the CMS experiment with the data accumulated during the 2010 & 2011 running of the LHC at sqrt(s) = 7 TeV.
Speaker: Maxwell Chertok (Department of Physics-University of California Davis (UCD)) Low Energy Searches for Physics Beyond the Standard Model 552 A The COMET Experiment to Search for Muon to Electron Conversion 20m Speaker: Mark Lancaster (UCL) The Mu2e Experiment at Fermilab 20m The goal of the Mu2e experiment is to improve on the existing experimental limits for the neutrinoless conversion of a muon into an electron by four orders of magnitude. Such sensitivity means that if low-energy supersymmetry is discovered at the LHC, Mu2e will provide complementary information. Even in the absence of new physics at the TeV scale, Mu2e could still find evidence for new physics at mass scales up to 10^4 TeV. In this talk I will give a brief account of the theoretical motivation for the experiment, the current status of the design, planned methods to detect the conversion electron and to suppress backgrounds, and the expected sensitivity of the Mu2e experiment. Speaker: Prof. Craig Group (University of Virginia) Results from the MEG Experiment 20m Speaker: Donato Nicolo (Pisa University) The Fermilab Muon (g-2) Experiment 20m Fermilab E989 has the goal to improve on the precision of the muon anomalous magnetic moment, a_mu = (g_mu - 2)/2, by at least a factor of 4 beyond the 0.54~ppm relative precision obtained in E821 at Brookhaven. The precision storage ring will be relocated to Fermilab and installed in a new building. A new 8~GeV/c proton beamline and 3.1~GeV/c muon beamline will be built. The unique capabilities of Fermilab to produce a proton beam with pulses containing ~1 x 10^{12} protons at an advantageous duty factor will provide the necessary increase of statistics in a reasonable running time. This new experiment should clarify the apparent > 3 sigma difference between the experimental and Standard-Model values of a_mu. Speaker: Bradley Lee Roberts (Boston University) 483_BLRoberts-g-2.pdf 483_BLRoberts-g-2.ppt The Enriched Xenon Observatory for double beta decay 20m The Enriched Xenon Observatory (EXO) is an experimental program designed to search for the neutrinoless double beta decay (0nbb) of Xe-136. Observation of 0nbb would determine an absolute mass scale for neutrinos, prove that neutrinos are massive Majorana particles (indistinguishable from their own antiparticles), and constitute physics beyond the Standard Model. The current phase of the experiment, EXO-200, uses 200 kg of liquid xenon with 80% enrichment in Xe-136, and also serves as a prototype for a future 1-10 ton scale EXO experiment. The double beta decay of xenon is detected in an ultra-low background time projection chamber (TPC) by collecting both the scintillation light and the ionization charge. The detector is now operational at the Waste Isolation Pilot Plant (WIPP) in New Mexico. It was first run with natural xenon to fully commission it and study its performance. Preparation for physics data taking is underway. The projected two-year sensitivity for the neutrinoless double beta decay half-life is 6.4E25 y at 90% confidence level. In view of a future ton scale experiment, the collaboration is performing R&D to realize an ideal, background-free search for which the daughter nucleus produced by the double beta decay is also individually identified. In this talk, the current status and preliminary results from EXO-200 will be presented, and prospects for a ton scale EXO experiment will be discussed.
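For context on how a neutrinoless double beta decay half-life is translated into a neutrino mass scale (a standard relation added for orientation, assuming the light Majorana-neutrino exchange mechanism; it is not part of the original abstract): $\left[T^{0\nu}_{1/2}\right]^{-1} = G^{0\nu}\,\left|M^{0\nu}\right|^2\,\frac{\langle m_{\beta\beta}\rangle^2}{m_e^2}$, where $G^{0\nu}$ is a phase-space factor, $M^{0\nu}$ the nuclear matrix element, and $\langle m_{\beta\beta}\rangle$ the effective Majorana mass. A half-life sensitivity such as the quoted 6.4E25 y therefore corresponds to a range of $\langle m_{\beta\beta}\rangle$ values whose spread reflects the nuclear matrix-element uncertainty.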
Speaker: Timothy Daniels (University of Massachusetts) Particle Astrophysics and Cosmology: Chaired by Jocelyn Monroe 550 Conveners: Jocelyn Monroe (MIT), Dr Scott Dodelson (Fermilab/UChicago) The MiniCLEAN Dark Matter Experiment 20m The MiniCLEAN dark matter experiment exploits a single-phase liquid argon (LAr) detector, instrumented with photomultiplier tubes submerged in the cryogen with nearly 4pi coverage of a 500 kg (150 kg) target (fiducial) mass. The high light yield and unique properties of the scintillation time-profile in LAr provide effective defense against radioactive backgrounds through pulse-shape discrimination and event position-reconstruction. The detector is also designed for a liquid neon target which, in the event of a positive signal in LAr, will enable an independent verification of backgrounds and provide a unique test of the expected A^2 dependence of the WIMP interaction rate. The conceptually simple design can be scaled to target masses in excess of 10 tons in a relatively straightforward and economic manner. The experimental technique and current status of MiniCLEAN will be summarized. Speaker: Dr Andrew Hime (Los Alamos National Laboratory) The DarkSide Program at LNGS 20m DarkSide is a direct detection dark matter program based on two-phase argon time projection chambers using argon from underground sources that is naturally depleted in 39Ar. DarkSide-50, the first physics detector in the DarkSide program, will be deployed within the Borexino CTF tank in Gran Sasso Laboratory, Italy. The unique combination of the CTF muon veto, ultra-low background construction techniques, depleted argon, and a dedicated high efficiency neutron veto based on boron-loaded liquid scintillator should give DarkSide-50 the ability to convincingly demonstrate a background expectation of a fraction of an event in a 0.1 tonne-year exposure. This will not only give the experiment the ability to probe for WIMP interactions with a cross-section sensitivity of 10E-45cm2, but also allow it to demonstrate the ability of larger, tonne-scale, detectors in the DarkSide program to operate background free. Speaker: Alex Wright (Princeton University) Constraining Light Dark Matter with CDMS II and SuperCDMS 20m There has been much recent interest in Weakly Interacting Massive Particle (WIMP) models with masses below 10 GeV/c^2. Data from the Cryogenic Dark Matter Search (CDMS II) have been reanalyzed to give increased sensitivity to these models. Using a lowered, 2 keV recoil energy threshold, we have reanalyzed data from eight germanium detectors operated at the Soudan Underground Laboratory, and used these data to place constraints on light WIMP models. We discuss the compatibility of these results with possible low-mass WIMP signals from the DAMA/LIBRA and CoGeNT experiments, and also discuss prospects for improving SuperCDMS sensitivity to light WIMPs by operating existing detectors in a high-voltage mode. Speaker: Scott Hertel (Department of Physics) Status of the LUX Dark Matter Experiment 20m The Large Underground Xenon (LUX) experiment will facilitate direct detection of Weakly Interacting Massive Particles (WIMPs) with a 350 kg xenon TPC. LUX will be able to detect 100 GeV WIMPs with scalar cross section as low as 7e−46 cm2, equivalent to ~0.5 events/100 kg/month in a 100 kg inner fiducial volume. 
Electromagnetic background event rates are limited below 5e−4 events/keV/kg/day by an extensive screening and background modeling program, and assume a conservative 99.5% electron recoil event rejection and 50% nuclear recoil acceptance for WIMP signatures. LUX is currently in the initial deployment phase at the Sanford Surface Laboratory at Homestake, during which all detector hardware and the full electronics chain are being extensively characterized. A miniature water shield reduces ambient electromagnetic backgrounds to rates allowing data taking via radioactive calibration sources. The underground deployment phase will begin in November 2011, with WIMP search data taking beginning shortly thereafter. LUX will surpass all existing dark matter limits for WIMPs with mass above ~10 GeV within days after beginning its science run. Speaker: Mr Jeremy Chapman (Brown) Status of CoGeNT 20m Recent results from CoGeNT and future directions will be discussed. Speaker: Juan Collar Colmenero (University of Chicago) After LUX: The LZ Program 20m The Large Underground Xenon (LUX) dark matter search experiment is currently being deployed at the Sanford Laboratory at Homestake in South Dakota (see Rick Gaitskell's talk), as a precursor to DUSEL. In partnership with more international institutions, we are already thinking about the next (two) experiment(s) that will follow: LZ-S (3 t) and LZ-D (20 t). This talk describes the work accomplished to date, the direction we are going, and the expected science schedule. Speaker: Mr David Malling (Brown University) Top Quark Physics: Chaired by Nick Kidonakis 553 HollowConeSieveForTops 20m The LHC is a top factory: in the SM about 8,000 top pairs should have been produced with more than 47 pb−1 of integrated luminosity already taken per detector at 7 TeV. Since the LHC center-of-mass energy is high compared to the top mass, the tops will typically be highly boosted, so that the decay products are close to each other. Thus, in the detector, at first sight, the top decay products may look like a fat jet instead of the several separate ones of which it is composed. In the top reconstruction, to catch all three main decay products of a top as a single fat jet, it is natural to use a large jet size. If a large R = 1.5 is used, it is likely that two jets will be constructed in the event (one from the top and the other from the anti-top), while more will be constructed using a small R. The light jets behave differently from the top jets, in that the number of reconstructed light jets does not vary with R. So, after subtracting light jets from dijet events, the top contribution can be seen in the variation of the number of jets with cone size R. We develop the "hollow cone" idea to tag top pairs. Consider the anti-kt algorithm as a "perfect cone" algorithm. When a larger cone size is used, both a ttbar event and a QCD dijet event will give two jets; when a smaller cone size is used, a ttbar event will have more jets while a QCD dijet event still has two. This means, for a fat jet with a large cone size, after subtracting a jet of small cone size in the interior, if some jets remain in the hollow cone, the jet is likely to be a top jet, and if there is no jet in the hollow cone, it is likely to be a light quark jet or a gluon jet. Our top tagging algorithm proceeds in the following steps, trying to separate top pairs from QCD dijet events (a short illustrative sketch of the jet-counting veto is given below, after the following abstract):
1) Reconstruct jets using the anti-kt jet algorithm with R = 1.5 to obtain a set of jets. The number of jets is njets.
2) Redo the jet reconstruction with R = 0.6 (or R = 0.7), following recent works of ATLAS and CMS, to obtain another set of jets.
3) Keep the event as a ttbar candidate if n_{jets,R=1.5} = 2 and n_{jets,R=0.6} > 2.
4) Go into the two jets reconstructed in step 1 and find all the subjets of each fat jet: for a fat jet of invariant mass mj, undo the last step of jet clustering to obtain two jets j1 and j2, with invariant masses mj1 and mj2 (mj1 > mj2). If mj1 < 0.9 mj, keep both j1 and j2; otherwise, keep only j1 to add to the subjet list and decompose further. Add ji to the jet substructure list if mji < 30 GeV, otherwise decompose ji iteratively. If the total number of subjets is less than 4, reject the event, because one hadronic top and one semileptonic top should give 4 subjets in total, and two hadronic tops will result in 6 subjets.
5) See whether there is a W inside either of the two fat jets; if not, reject the event. To do this, look into a fat jet and iterate over all of the two-subjet configurations. After the jet filtering, if the invariant mass of the two subjets falls in the window of 65 GeV to 95 GeV, tag that configuration as a W.
6) See whether either of the two jets has a subjet that can be tagged as a b jet. The jet candidates of a W must not be tagged as a b-jet. Keep the other b-tagged events.
7) Any event that survives the above sequence is tagged as a ttbar event.
Backgrounds: The main backgrounds are Wbbbar and Zbbbar. Since there will be b jets in both cases, and the Z mass is close to the W mass, these two backgrounds are indistinguishable in their hadronic decay channels. Other backgrounds are QCD dijets from light quarks and gluons and QCD multi-jet events. QCD dijet events are removed by the "hollow cone" cut, since the anti-kt jet algorithm is collinear and infrared safe. QCD trijet events are eliminated by requiring 4 or more subjets. With fake b-jets, Wjj and Zjj events also contribute to the background. We apply the following cuts in sequence:
cut 1: The "hollow cone" sieve. Require njets = 2 and nveto > 2.
cut 2: Total number of subjets >= 4.
cut 3: A hadronic W can be tagged.
cut 4: A b jet can be tagged.
After the cuts, ttbar is 4.05 pb, Wbbbar is down to 0.18 pb, Zbbbar is down to 0.43 pb, Wjj is down to 0.08 pb, and Zjj is down to 0.26 pb. The resulting ratio of hadronic tops to semileptonic tops is 2.81, which is consistent with the ratio of decay branching fractions of 3.13. The transverse momentum distribution of tagged tops shows that the method is picking top jets instead of light jets, and also demonstrates that top jets with relatively low pT can be tagged. Former top tagging techniques require the pT of the top to be harder than 200 GeV.
Speaker: Ms Peisi Huang (Department of Physics-University of Wisconsin-Madison) Search for ttbar Resonances in the Lepton plus Jets Channel in pp Collisions at Sqrt(s)=7 TeV using the ATLAS Detector 20m Several Beyond the Standard Model (BSM) theories predict the existence of new resonances that decay into ttbar pairs. We describe a search for such resonances using lepton plus jets data collected by the ATLAS experiment in pp collisions at Sqrt(s)=7 TeV. The selection criteria and search method are presented. In the absence of signal, we produce 95% CL limits on the production cross section times branching ratio of resonances predicted by a few such BSM models.
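Returning to the hollow-cone top-tagging entry above: the following minimal Python sketch illustrates only the jet-counting veto of steps 1-3 (cut 1). It is not the authors' code; the clustering itself is abstracted into a hypothetical helper cluster_antikt(event, R, pt_min) (which could be backed by, e.g., the FastJet Python bindings), and the pT threshold is an arbitrary placeholder. The radius parameters are taken from the abstract.

# Minimal sketch of the "hollow cone" veto (cut 1) described above.
# Assumes a hypothetical helper cluster_antikt(event, R, pt_min) that
# returns the list of anti-kt jets clustered with radius parameter R.

def hollow_cone_candidate(event, cluster_antikt, pt_min=30.0):
    """Return True if the event passes n(R=1.5) == 2 and n(R=0.6) > 2."""
    fat_jets = cluster_antikt(event, R=1.5, pt_min=pt_min)   # large-cone jets
    slim_jets = cluster_antikt(event, R=0.6, pt_min=pt_min)  # small-cone jets
    # A QCD dijet event keeps two jets at both radii; a boosted ttbar event
    # splits into more jets once the smaller cone no longer contains the
    # full set of top decay products.
    return len(fat_jets) == 2 and len(slim_jets) > 2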
Speaker: Venkatesh Kaushik (University Of Arizona) Discriminating Top-Antitop Resonances using Azimuthal Decay Correlations 20m Top-antitop pairs produced in the decay of a new heavy resonance will exhibit spin correlations that contain valuable coupling information. When the tops decay, these correlations imprint themselves on the angular patterns of the final quarks and leptons. I will discuss how to probe the structure of a resonance's couplings to tops by measuring the azimuthal angles of the tops' decay products about the production axis. These angles exhibit modulations from helicity interference which are typically O(0.1-1), and which by themselves allow for discrimination of spin-0 from higher spins, measurement of the CP-phase for spin-0, and measurement of the vector/axial composition for spins 1 and 2. For relativistic tops, the azimuthal decay angles can be well-approximated without detailed knowledge of the tops' velocities, and appear to be robust against imperfect energy measurements and neutrino reconstructions. I will illustrate this point in the highly challenging dileptonic decay mode, which also exhibits the largest modulations. I will comment on the relevance of these observables for testing axigluon-like models that explain the top quark $A_{FB}$ anomaly at the Tevatron, through direct production at the LHC. Speaker: Brock Tweedie (Boston University) Measurement of the top pair invariant mass distribution at 7 TeV and search for New Physics 20m We present a measurement of the top-pair invariant mass distribution in ttbar events using proton-proton collisions at the LHC at a centre-of-mass energy of 7 TeV. We use data collected with the CMS experiment during the year 2011. The analysis is performed in all possible final states originating from top-pair production, and the full event reconstruction is performed by using different reconstruction methods according to the final state under consideration. The measurement is then used for searching for production of a massive, narrow-width, neutral boson decaying into top-pairs. We observe no significant deviations from the QCD expectations, and therefore translate the measurement into an upper limit on the new physics production cross-section as a function of the particle mass. Speaker: Salvatore Rappoccio (Department of Physics and Astronomy-Johns Hopkins University (J) Measurements of spin correlation in $\boldsymbol{t\bar{t}}$ production at D0 20m We measure the correlation between the spin of the top quark and the spin of the anti-top quark in $t\bar{t} \rightarrow W^{+}bW^{-}\bar{b}$ final states produced in $p\bar{p}$ collisions at a center of mass energy $\sqrt{s}=1.96$~TeV, using data collected with the D0 detector at the Fermilab Tevatron collider. The correlation is extracted using a double differential angular distribution, and a novel technique using matrix element integration is used to increase the sensitivity of the result. Measurements are performed in both the dilepton and lepton+jets final states.
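For the spin-correlation measurement above, the double differential angular distribution referred to is conventionally written as (added for reference; sign and spin-quantization-axis conventions vary between analyses) $\frac{1}{\sigma}\frac{d^2\sigma}{d\cos\theta_1\,d\cos\theta_2} = \frac{1}{4}\left(1 - C\,\cos\theta_1\cos\theta_2\right)$, where $\theta_1$ and $\theta_2$ are the angles of a chosen decay product from each top relative to the quantization axis and $C$ parameterizes the strength of the correlation.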
Speaker: Kenneth Bloom (Department of Physics and Astronomy-University of Nebraska) Convener: Savvas Koushiappas (Brown University) High Energy Gamma-Ray and Neutrino Astrophysics 30m Speaker: John Beacom (Ohio State University) Early Universe and Cosmology 30m Speaker: Mark Trodden (University of Pennsylvania) Dark Matter Searches 30m Speaker: Jonghee Yoo (FNAL) MICE step I: first measurement of emittance with particle physics detectors 30m The muon ionization cooling experiment (MICE) is a strategic R&D project intending to demonstrate the only practical solution to prepare high brilliance beams necessary for a neutrino factory or muon colliders. MICE is under development at the Rutherford Appleton Laboratory (UK). It comprises a dedicated beam line to generate a range of input emittance and momentum, with time-of-flight and Cherenkov detectors to ensure a pure muon beam. The emittance of the incoming beam is measured in the upstream magnetic spectrometer with a sci-fiber tracker. A cooling cell will then follow, alternating energy loss in Li-H absorbers and RF acceleration. A second spectrometer identical to the first and a second muon identification system measure the outgoing emittance. In the 2010 run the beam and most detectors have been fully commissioned and a first measurement of the emittance of a beam with particle physics (time-of-flight) detectors has been performed. The analysis of these data should be completed by the time of the Conference. The next steps of more precise measurements, of emittance and emittance reduction (cooling), that will follow in 2011 and later, will also be outlined. Speaker: Ulisse Bravar Precision Calibration of the Luminosity Measurement in ATLAS 30m A precision luminosity measurement is of critical importance for the ATLAS physics program, both for searches for new physics as well as for precision measurements of Standard Model cross-sections. The calibration of the luminosity is based on three so-called van der Meer scans that were performed in 2010. These scans determine the convolved beam sizes in the vertical and horizontal directions, and together with precise knowledge of the beam currents are used to determine an absolute luminosity scale. Based on this analysis ATLAS has determined the luminosity with a total uncertainty of 3.4% for the 2010 data recorded at root(s) = 7 TeV. Speaker: Eric Torrence (University of Oregon) Novel Accelerator Methods and Technologies for KEKB Upgrade 30m The KEKB B factory is being upgraded to search for physics beyond the Standard Model, with a target luminosity of 8x10^35 cm^-2 s^-1, a factor of 40 times greater than the world record luminosity achieved at KEKB. To achieve this target luminosity the upgraded machine, SuperKEKB, will require the use of new advances in accelerator technology, among them the development of a low-emittance, high-bunch-charge injector system, a high-beam-current vacuum system incorporating the latest electron-cloud mitigation techniques, an interaction region design that provides a low beta function at the collision point while minimizing emittance growth due to fringe fields and maximizing the dynamic aperture, and beam diagnostics and feedback for monitoring and controlling low-emittance beams and their collisions. This talk will discuss the design challenges facing SuperKEKB, and the technologies that are being developed to meet them. 
Speaker: Dr John Flanagan (KEK) Supersymmetric multiple Higgs doublet models 20m The minimal supersymmetric standard model (MSSM) is extended by the inclusion of an additional pair of constrained Higgs doublet superfields through which the electroweak symmetry breaking is nonlinearly realized. The superpotential coupling to the MSSM Higgs doublet then generates its vacuum expectation value. The resulting Higgs scalar and Higgsino-gaugino mass spectra are presented for several choices of SUSY breaking and Higgs superpotential mass parameters, and the results are contrasted with those of the MSSM. Speaker: Sherwin Love (Purdue University) Collider Phenomenology of the E6SSM 20m We consider collider signatures of the exceptional supersymmetric standard model (E6SSM). This model is based on the SM gauge group together with an extra U(1)_{N} gauge symmetry under which right--handed neutrinos have zero charge. To ensure anomaly cancellation and gauge coupling unification the low energy matter content of the E6SSM involves three 27 representations of E_6 and a pair of SU(2) doublets from additional 27 and \bar{27}. Thus the E6SSM predicts a Z' boson and extra matter beyond the MSSM. In particular, the low--energy spectrum of the E6SSM involves three families of Higgs--like doublets, three families of exotic quarks and three SM singlets that carry U(1)_{N} charges. The E6SSM Higgs sector contains one family of the Higgs--like doublets and one SM singlet that develop vacuum expectation values (VEVs) breaking gauge symmetry. The fermionic and bosonic components of the other Higgs--like and singlet superfields form Inert neutralino and chargino states and Inert Higgs states respectively. The two lightest Inert neutralinos tend to be the lightest and next-to-lightest SUSY particles (LSP and NLSP). We analyse the Higgs sector, examine the spectrum and couplings of the Inert neutralinos and charginos and study cosmological implications of the E6SSM. The SM-like Higgs boson can be significantly heavier in the E6SSM than in the MSSM and NMSSM. The model can account for the dark matter relic abundance if the lightest Inert neutralino has mass close to half the Z mass. In this case the SM-like Higgs boson decays more than 95% of the time into either LSPs or NLSPs. This scenario also predicts other light Inert chargino and neutralino states below 200 GeV, and large LSP direct detection cross-sections which are on the edge of observability of XENON100. We also examine the production of the Z' and exotic quarks at the LHC. Since exotic quarks in the E6SSM can be either diquarks or leptoquarks they may provide spectacular new physics signals at the LHC. Speaker: Roman Nevzorov (University of Hawaii) Detecting Fourth Generation Heavy Quarks at the LHC 20m In this talk, I will discuss the production of fourth generation quarks at the LHC. In particular, if such a quark has a mass in the phenomenologically interesting range of 400-600 GeV and decays to a light quark and a W-boson, I will consider a number of possible signals through which it might be detected. In general, the signals I consider include missing momentum together with jets and either a single high-Pt lepton, an opposite sign pair of high-Pt leptons or a same sign pair of high-Pt leptons. In each case I will discuss methods for separating the signal from the three generation Standard Model background. I will show that these methods should allow the detection of heavy fourth generation quarks for a wide range of quark mass and mixing rates.
Speaker: Dr David Atwood (Iowa State University) Higgs Production through Top-prime decays at the LHC 20m We explore LHC signatures of vectorlike quarks, which are hypothetical fermions whose left- and right-handed components have the same electroweak quantum numbers. We consider interactions of such a quark, top-prime, with the top quark via a Yukawa coupling and with a bottom quark through a $W$ boson. We look at Higgs production through the decay of the top-prime in a top-prime pair production channel through QCD at the LHC with $\sqrt{s} = $ 7 TeV. Such a process depends only on the top-prime mass. In this channel, we consider semi-leptonic $W$-boson decays. This choice is dictated by the reduction of QCD background and a higher cross section than the corresponding much cleaner di-leptonic channel. We suggest a background discrimination strategy involving $b$-tagging and a lepton in the final state. The possibility of the top-prime decaying into a light (120 GeV/$c^2$) and a relatively heavier Higgs (150 GeV/$c^2$) will be explored. The mixing angle between the top and the top-prime, which is a parameter of the model, has been chosen judiciously so that the analysis remains as model independent as possible. Speaker: Ms Saptaparna Bhattacharya (Brown University) Charged X Production in Simplest Higgs via Gauge Boson Fusion 20m The Simplest Higgs model is one of several Little Higgs extensions that attempt to address the hierarchy problem through expansion of the Standard Model weak sector. New charged and neutral weak gauge bosons can cancel out the quadratic Higgs divergences in the Higgs potential through loop diagrams. Production of charged X gauge bosons is studied through gauge boson fusion in pp collisions at 7 through 14 TeV. The pp > qqW±Y > qQX± production method of X± gauge bosons is shown to produce cross-sections that could be large enough to be detectable at the LHC. Here Q represents new heavy quarks that come out of the Simplest Higgs model, and Y is a massive neutral boson. Possible decay products of these gauge bosons are also discussed. Speaker: Mr Matthew Bishara (University of Rochester) First ADS Analysis of B+ --> D0 K Decays in Hadron Collisions 24m We report the first measurement of branching fractions and CP-violating asymmetries of doubly Cabibbo-suppressed B+ --> D0 K decays in hadron collisions, using the approach proposed by Atwood, Dunietz, and Soni (ADS) to determine the CKM angle gamma in 7.0 fb-1 of data. The ADS parameters are determined with accuracy comparable with B factory measurements. Speaker: Paola Garosi Recent BABAR results on CP violation in B decays 24m We report on the study of the decay B+ -> D0(D0bar) K+ with the D0 or D0bar decaying to K pi pi0, using the Atwood, Dunietz, and Soni (ADS) method. We measure the ratios Rads, R+, and R-, which, since the processes B+ -> D0barK+ and B+ -> D0K+ are proportional to Vcb and Vub, respectively, are sensitive to rB and to the weak phase gamma. We also report the results of CP violation studies of B->Dcp pi+pi- and B0->D*D*. Speaker: Dr Romulus Godang (University of Mississippi) Measurements of CP Violation in B Decay at Belle 24m We present the final measurement of time-dependent CP violation in the neutral B decays into charmonium and a K^0 meson with a large data sample containing 772 million B \bar{B} pairs collected at the Upsilon(4S) resonance with the Belle detector at the KEKB asymmetric-energy e+e- collider.
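For the time-dependent measurement above, the fitted asymmetry has the standard form (added for reference; sign conventions differ between Belle and BaBar): $a_{CP}(\Delta t) = \frac{\Gamma(\bar B^0\to f_{CP};\Delta t)-\Gamma(B^0\to f_{CP};\Delta t)}{\Gamma(\bar B^0\to f_{CP};\Delta t)+\Gamma(B^0\to f_{CP};\Delta t)} = \mathcal{S}\sin(\Delta m_d\,\Delta t) + \mathcal{A}\cos(\Delta m_d\,\Delta t)$, with $\mathcal{S} = -\xi_f\sin 2\phi_1$ ($\xi_f$ the CP eigenvalue of the final state) and $\mathcal{A}\approx 0$ expected in the Standard Model for charmonium plus $K^0$ final states.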
We also report the first results on CP violation in the radiative penguin decays B^0->K_S phi gamma and B^0->omega K_S gamma, improved measurements of CP violation in B^0->pi^+ pi^- and $B^0 \to a_1^{\pm} \pi^{\mp}$ decays, as well as new measurements of B decays to rho^0 rho^0 and pairs of charm mesons. Speaker: Himansu Sahoo (University of Hawaii) Updated Measurement of Charmless B Decays at CDF 24m We present world-leading results on CP-violating asymmetries and branching fractions of several decay modes of B0, Bs, and Lambda_b hadrons into charmless two-body final states using 6/fb of data collected by the CDF experiment. A New CP Violating Observable for the LHC 24m We study a new type of CP violating observable that arises in three body decays. We consider decays that are dominated by an intermediate resonance that can go on shell. In many cases, the decay can occur via two different orderings. The required CP-even phase arises due to the different virtualities of the resonance in the two diagrams corresponding to these orderings. This method can be an important tool for accessing new CP phases at the LHC and future colliders. Speaker: Joshua Berger (Cornell University) Computing in HEP 552 A Conveners: Ian Fisk (Fermi National Accelerator Laboratory (FNAL)), Michael Ernst (BNL, ATLAS) An outlook of the user support model to educate the users community at the CMS Experiment 30m The CMS (Compact Muon Solenoid) experiment is one of the two large general-purpose particle physics detectors built at the LHC (Large Hadron Collider) at CERN in Geneva, Switzerland. In order to meet the challenges of designing and building a detector of the technical complexity of CMS, a globally distributed collaboration has been assembled with different backgrounds, expertise, and experience. An international collaboration of nearly 3500 people from nearly 200 institutes in 40 different countries built and now operates this complex detector. The diverse collaboration combined with a highly distributed computing environment and Petabytes/year of data being collected makes CMS unlike any other High Energy Physics collaboration before. This presents new challenges in educating and bringing users, coming from different cultural, linguistic and social backgrounds, up to speed to contribute to the physics analysis. CMS has been able to deal with this new paradigm by deploying a user support structure model that uses collaborative tools to educate and reach out to its users via robust software and computing documentation, a series of hands-on tutorials per year facilitating the usage of common physics tools, annual hands-on-learning workshops on physics analysis, and user feedback to maintain and improve the CMS specific knowledge base. This talk will describe this model, which has proved to be successful compared to its predecessors in other HEP experiments where structured user support was missing and word of mouth or sitting with experts one-on-one was the only way to learn the tools to do physics analysis. To carry out the user support mission worldwide, an LHC Physics Center (LPC) was created a few years ago at Fermilab as a hub for US physicists. The LPC serves as a "brick and mortar" location for physics excellence for the CMS physicists where graduate and postgraduate scientists can find experts in all aspects of data analysis and learn via tutorials, workshops, conferences and gatherings.
Following the huge success of LPC, a center at CERN itself called LHC Physics Center at CERN ( LPCC) and Terascale Analysis Center at DESY have been created with similar goals. The CMS user support model would also facilitate in making the non-CMS scientific community learn about CMS physics. A good example of this, is the effort by HEP experiments, including CMS, to focus on data preservation efforts. In order to facilitate its use by the future scientific community, who may want to re-visit our data, and re-analyze it, CMS is evaluating the resources required. A detailed, good quality and well maintained documentation by the user support group about the CMS computing and software may go a long way to help in this endeavor. Speaker: Prof. Sudhir Malik (University of Nebraska/FNAL) ATLAS data analysis on Grid 30m In 2010 and 2011 the ATLAS Collaboration at LHC collected a large volume of data and published a number of ground breaking papers. The Grid-based ATLAS distributed computing infrastructure played a crucial role in enabling timely analysis of the data. We will present the architecture and general features of the ATLAS distributed analysis system, and discuss the performance of the system and the underlying Grid infrastructure as an analysis platform. We will also discuss future directions in the evolution of ATLAS distributed analysis aimed at improvement of analysis capabilities, robustness and efficiency of the system. Speaker: Dr Sergey Panitkin (Department of Physics - Brookhaven National Laboratory (BNL)) CMS Computing: Performance and Outlook 30m After years of development, the CMS distributed computing system is now in full operation. The LHC continues to set records for instantaneous luminosity, recording data at 300 Hz. Because of the intensity of the beams, there are multiple proton-proton interactions per beam crossing, leading to larger and larger event sizes and processing times. The CMS computing system has responded admirably to these challenges. We will present the present status of the system, describe the recent performance, and discuss the challenges ahead and how we intend to meet them. Persistent Data Layout and Infrastructure for Efficient Selective Retrieval of Event Data in ATLAS 30m The ATLAS detector [1] at CERN has completed its first full year of recording collisions at 7 TeV, resulting in billions of events and petabytes of data. At these scales, physicists must have the capability to read only the data of interest to their analyses, with the importance of efficient selective access increasing as data taking continues. ATLAS has developed a sophisticated event-level metadata infrastructure (TAG [2]) and supporting I/O framework [3] allowing event selections by explicit specification, by back navigation, and by selection queries to a TAG database via an integrated web interface (iELSSI). These systems and their performance have been reported on elsewhere. The ultimate success of such a system, however, depends significantly upon the efficiency of selective event retrieval. Supporting such retrieval can be challenging, as ATLAS stores its event data in column-wise orientation using ROOT TTrees [4] for a number of reasons, including compression considerations, histogramming use cases, and more. For 2011 data, ATLAS will utilize new capabilities in ROOT to tune the persistent storage layout of event data, and to significantly speed up selective event reading. 
The new persistent layout strategy and its implications for I/O performance will be presented in this paper. [1] ATLAS Collaboration, ATLAS Detector and Physics Performance Technical Design Report, CERN-LHCC-1999-14 and CERN-LHCC-1999-15. [2] J. Cranshaw et al, "Event selection services in ATLAS", in J. Phys.: Conf. Ser., vol. 219, 042007, 2010 [3] P. van Gemmeren, D. Malon, "The event data store and I/O framework for the ATLAS experiment at the Large Hadron Collider", in IEEE International Conference on Cluster Computing and Workshops, 2009, pp.1-8. [4] R. Brun and F. Rademakers, "ROOT – An Object Oriented Data Analysis Framework", Nucl. Inst. & Meth. in Phys. Rev. A 389 (1997) 81-86. See also http://root.cern.ch Speaker: Peter Van Gemmeren (Argonne National Laboratory) Heavy Flavor Physics: Bs and tau decays 554 Searches for Rare and Forbidden B and Charm Decays with BABAR 20m We present recent BABAR results of searches for rare decays with new physics sensitivity. In particular, we present the results of searches for B -> gammma gamma and the lepton and baryon number violating modes B->Lambda(c)l and B->K/pi tau l. We also describe recent searches for the charm decays D-> Xl+l-, D0 -> gamma gamma and D0->l+l-. Speaker: Alessandro Rossi (INFN Perugia) Bc and Suppressed Bs Decays at CDF 20m We present new results of CDF measurements of Bc and suppressed Bs decays. The first measurement of the Bc lifetime in an exclusive fully-reconstructed final state is reported. An improved measurement of the Bs->DsDs decay is reported along with a measurement of the branching ratio and lifetime for Bs->J/psif0 decays and the first measurement of a CP violating asymmetry in Bs->phiphi decays. Speaker: William Wester (Fermi National Accelerator Lab. (Fermilab)) Updated Measurement of B(Bs -> Ds(*)+Ds(*)-) and Determination of Delta Gamma_CP 20m Using fully reconstructed $B_{s}$ mesons, we measure the branching fractions for the decays of $B_s \to D_s^{(*)+}D_s^{(*)-}$ exclusively. Assuming these decay modes saturate decays to CP-even final states, the branching fraction determines the relative width difference between the $CP$-odd and $CP$-even $B_s$ states. The results are based on a data sample collected with the Belle detector at the $\Upsilon(5S)$ resonance with an integrated luminosity of 122 fb$^{-1}$ at the KEKB asymmetric-energy $e^+ e^-$ collider. Speaker: Ms Sevda Esen (University of Cincinnati) Measurement of the relative branching fraction of Bs -> J/psi f_0(980), f_0(980) -> pi+pi- to Bs-> J/psi phi, phi -> K+K- 19m A measurement of the relative branching fraction of $B^0_s\to J/\psi f_0(980)$, $f_0(980)\to\pi^+\pi^-$ to $B^0_s\to J/\psi \phi$, $\phi\to K^+K^-$ is presented. The decay mode $B^0_s\to J/\psi f_0(980)$ is an interesting mode since it is a CP eigenstate and allows the measurement of the CP-violating phase $\phi_s$. Using approximately 8 fb$^{-1}$ of data recorded with the D0 detector at the Fermilab Tevatron Collider, a relative branching fraction of $0.210\pm0.032 (\text{stat}) \pm 0.036 (\text{syst})$ is found. Speaker: Braden Keim Abbott (Department of Physics and Astronomy-University of Oklahoma) Three-Pion Decays of the tau Lepton, the a_1(1260) Properties, and the a_1-rho-pi Lagrangian 20m We show that the a_1-rho-pi Lagrangian is a decisive element for obtaining a good phenomenological description of the three-pion decays of the tau lepton. We choose it in a two-component form with a flexible mixing parameter sin(theta). 
In addition to the dominant a_1-> rho+pi intermediate states, the a_1-> sigma+pi ones are included. When fitting the three-pion mass spectra, three data sets are explored: (1) ALEPH 2005 pi-pi-pi+ data, (2) ALEPH 2005 pi-pi0pi0 data, and (3) previous two sets combined together and supplemented with the ARGUS 1993, OPAL 1997, and CLEO 2000 data. The corresponding confidence levels are (1) 28.3%, (2) 100%, and (3)7.7%. After the inclusion of the a_1(1640) resonance, the agreement of the model with data greatly improves and the confidence level reaches 100% for each of the three data sets. From the fit to all five experiments [data set (3)] the following parameters of the a_1(1260) are obtained m_{a_1}=(1233+/-18) MeV, Gamma_{a_1}=(431+/-20) MeV. The optimal value of the Lagrangian mixing parameter sin(theta)=0.459+/-0.004 agrees with the value obtained recently from the e+e- annihilation into four pions. Speaker: Prof. Peter Lichard (Silesian University in Opava) Gravitational collapse and far from equilibrium dynamics in holographic gauge theories 30m In recent years holography has emerged as a powerful tool to study non-equilibrium phenomena in certain quantum theories, mapping challenging quantum dynamics onto the classical dynamics of gravitational fields in one higher dimension. One interesting process accessible with holography is the formation of a quark-gluon plasma in strongly coupled non-Abelian gauge theories. In the dual gravitational description, the formation of a quark-gluon plasma maps onto the process of gravitational collapse and black hole formation. I will describe how one can use techniques from numerical relativity to study this process. Speaker: Paul Chesler (MIT) Monte-Carlo simulation of jets in heavy-ion collisions 30m I present recent developments in simulating heavy-ion collisions using a Monte-Carlo event-generator to study high momentum probes. The simulation contains medium effects on the hard probes via the elastic and radiative energy loss and momentum broadening. The lower momentum bulk medium is simulated using relativistic hydrodynamics. Apart from inclusive observables such as the nuclear modification factor, I present results for the dijet asymmetry measured at the Large Hadron Collider, employing state of the art jet reconstruction methods. I demonstrate that Monte-Carlo simulations are an essential tool for connecting fundamental theory to experiments and extracting important information about the properties of the medium created in heavy-ion collisions and its interactions. Speaker: Dr Bjoern Schenke (BNL) Effective theory for jets in medium 30m We revisit the jet broadening and radiative energy loss problems in heavy ion collisions from effective theory point of view. Soft collinear effective theory (SCET) describes the dynamics of QCD at high energies and is particularly suitable for calculations involving jets. By modifying its Lagrangian to include medium interactions we develop an effective theory for jets in medium. A number of issues are addresses in this new language. We demonstrate the gauge invariance of results for jet broadening and radiative energy loss. We show how the cross-section for radiative corrections to jet production factorizes for QCD hard processes. We include the effect of the nuclear recoil in the medium and quantify it for RHIC and LHC energies. Also we calculate the radiative energy loss beyond the conventional soft gluon approximation, extending the previous results to large $x$ values. 
We discuss the phenomenological applications for RHIC and LHC. Speaker: Dr grigory ovanesyan (LANL) Transverse Momentum Broadening in Weakly Coupled Quark-Gluon Plasma 25m Jet quenching parameter or, equivalently, transverse momentum broadening distribution function is an important quantity which helps to understand energy losses in heavy ion collisions and get insights into properties of the de-confined quark-gluon plasma. SCET provides framework to calculate jet quenching parameter at weak coupling using expectation value of two space-like separated light-like Wilson lines which can be evaluated for desired medium. In this work we evaluate transverse momentum broadening distribution function for the quark-gluon plasma in thermal equilibrium using Hard Thermal Loop (HTL) resummed effective thermal field theory and estimate corrections to this approximations. Speaker: Mr Mindaugas Lekaveckas (MIT) Review of recent theoretical developments in Higgs physics 30m The search for the SM Higgs boson is reaching a critical juncture. Either evidence will be found or an exclusion will be set during the upcoming LHC year of running. We review the recent experimental results and the theoretical progress that has enabled the search for the Higgs. Speaker: Radja Boughezal (Argonne National Lab.) The Combination of Higgs Searches with the ATLAS Detector 20m Upper limits on the cross section of Standard Model Higgs boson production at the Large Hadron Collider (LHC) running at a centre-of-mass energy of 7 TeV are determined, based on the searches performed by the ATLAS Collaboration. Models with a fourth generation of heavy leptons and quarks with Standard Model-like couplings to the Higgs boson are also investigated. Searches For The Higgs Boson With The CMS Detector 20m We report on the various SM and BSM Higgs Boson searches conducted by the CMS experiment with the data accumulated during the 2010 & 2011 running of the LHC at sqrt(s) = 7 TeV. Speaker: Dr Marat Gataullin (Charles C. Lauritsen Lab. of HEPhys-California Institute of Tec) Combined upper limits on Higgs boson production in the Standard Model, fourth generation and fermiophobic models in proton-antiproton collisions at 1.96 TeV at the Tevatron 35m The combined results from CDF and D0 on direct searches for the standard model (SM) Higgs boson H in ppbar collisions at the Fermilab Tevatron at sqrt(s)=1.96 TeV are presented. Compared to the previous Tevatron Higgs search combination more data have been added, additional new channels have been incorporated, and some previously used channels have been reanalyzed to gain sensitivity. We use the latest parton distribution functions and gluon fusion to Higgs theoretical cross sections when comparing our limits to the SM predictions. In addition to limits on the SM, the results are interpreted in the context of a fermiophobic model in which the diphoton and WW final states are enhanced and also in the context of a model in which the gluon fusion production mode is enhanced by the existence of a fourth generation of fermions. With up to 8.0 fb-1 of data analyzed at CDF and D0, the 95% C.L. upper limits on Higgs boson are calculated. Speaker: Richard Edward Hughes (Ohio State University) Neutrino Physics: Chaired by John Beacom 550 Neutrino Physics with SciBooNE 20m SciBooNE (FNAL E954) is designed to measure precise neutrino cross sections on carbon in the one GeV region. Moreover, SciBooNE can serve as a near detector for MiniBooNE neutrino and antineutrino oscillation searches. 
I will present SciBooNE's most recent results on neutrino cross section measurements and the search for neutrino disappearance with MiniBooNE. Speaker: Dr Morgan Wascko (Imperial College London) Neutrino interactions in the NOvA near detector prototype 20m The NuMI Off-Axis electron neutrino Appearance (NOvA) experiment has started taking data with the 209 ton liquid scintillator-filled prototype of the near detector in the end of November 2010. This detector collects data from two sources, the Main Injector complex and from the Booster Neutrino Beam. At the location of the prototype detector due to the off-axis effect the NuMI beam is narrow with maximum around 2GeV. On the other hand the detector is on axis of the BNB beam and sees its maximum around 1GeV. This configuration gives the NOvA experiment a unique opportunity of studying neutrino and anti-neutrino interactions with carbon target from two low energy beams. I will present physics program for the NOvA experiment focusing on the cross section measurements and preliminary data obtained with the near detector prototype. Speaker: Dr Jaroslaw Nowak (University of Minnesota) Early Neutrino Data in the NOvA Near Detector 20m NOvA is a long baseline neutrino experiment using an off-axis neutrino beam produced by the NuMI neutrino beam at Fermilab. The NOVA experiment will study neutrino oscillations from $\nu_{\mu}$ flavor to $\nu_e$ flavor. A short term goal for the NOvA experiment is to develop a good understanding of the response of the detector. These studies are being carried out with the full Near Detector installed on the surface at Fermilab (NDOS). This detector is currently running and will acquire neutrino data for a year. Using beam muon neutrino data, quasi-elastic charge-current interactions will be studied. Status of the NDOS running and early data will be shown. Speaker: Minerba Betancourt (University of Minnesota) SciNOvA: A measurement of neutrino-nucleus scattering in a narrow-band beam 20m SciNOvA is a proposed experiment to deploy a fine-grained scintillator detector in front of the NOvA near detector to collect neutrino-nucleus scattering events in the NUMI, off-axis, narrow-band neutrino beam at Fermilab. This detector can make unique contributions to the measurement of charged- and neutral-current quasi-elastic scattering; and neutral-current $\pi^0$ and photon production. These processes are important to understand for fundamental physics and as backgrounds to measurements of electron neutrino appearance oscillations. The talk will present the strategy and science case of the SciNOvA experiment. Speaker: Xinchun Tian (Univesrity of South Carolina) Lorentz noninvariant neutrino oscillations without neutrino masses 20m The bicycle model of Lorentz noninvariant neutrino oscillations without neutrino masses naturally predicts maximal mixing and a 1/E dependence of the oscillation argument for muon-neutrino to tau-neutrino oscillations of atmospheric and long-baseline neutrinos, but cannot also simultaneously fit the data for solar neutrinos and KamLAND. We search for other possible structures of the effective Hamiltonian for Lorentz noninvariant oscillations without neutrino mass that naturally have 1/E dependence at high neutrino energy. Due to the lack of any evidence for direction dependence, we consider only direction-independent models. 
Although a number of models are found with 1/E dependence for atmospheric and long-baseline neutrinos, none can also simultaneously fit solar, reactor and short-baseline neutrino data. Speaker: Prof. Kerry Whisnant (Iowa State University) Three-Parameter Lorentz-Violating Model for Neutrino Oscillations 20m A three-parameter model of neutrino oscillations is presented. It is based on a simple Lorentz- and CPT-violating texture and is consistent with compelling oscillatory signals and null tests involving atmospheric, accelerator, reactor, and solar neutrinos. The solar and atmospheric mixing angles are fixed by the texture at both low and high energies instead of being independent parameters as in most descriptions. One natural feature of the model is anomalous appearance signals in MiniBooNE at low energies, consistent with recent observations for both neutrinos and antineutrinos. Simple texture-preserving extensions of the model can accommodate the recent MINOS anomaly and the LSND signal. Speaker: Jorge S Diaz (Indiana University) Particle Astrophysics and Cosmology: Chaired by Andrew Zentner 556 The Dark Energy Survey Data Management System 20m The Dark Energy Survey (DES) is a project with the goal of building, installing and exploiting a new 70 CCD-camera at the Blanco telescope, in order to study the nature of cosmic acceleration. It will cover 5000 square degrees of the southern hemisphere sky and will record the positions and shapes of 300 million galaxies up to redshift 1.4. The survey will be completed using 525 nights during a 5-year period starting in 2012. About O(1 TB) of raw data will be produced every night, including science and calibration images. The DES data management system has been developed for the processing, calibration and archiving of these data. It is being developed by collaborating DES institutions, led by NCSA. In this contribution, we detail how a high performance computing environment is the best choice for this task, what kind of scientific codes are involved and how the Data Challenge process works, to improve simultaneously the Data Management system algorithms and the Science Working Group analysis codes. Speaker: Mr Ignacio Sevilla (CIEMAT) Studying Cosmic Acceleration with the Dark Energy Survey 20m The Dark Energy Survey (DES) will use a new massive imaging instrument, the Dark Energy Camera (DECam), to study the properties of the mysterious, presently-dominant source of energy that is causing the universe to go through an accelerating expansion. The camera will be installed on the 4-meter Blanco telescope at the Cerro Tololo Inter-American Observatory and commissioning is expected to start in the end of 2011. Over five years, DES will carry out a high-precision photometric survey of 5000 square degrees to detect and study the properties of over 300 million galaxies in the southern sky. Repeat observations of a smaller patch in the sky will discover thousands of Type Ia supernovae for precision distance measurements. We will describe how the four complementary probes of dark energy -- weak lensing, galaxy clusters, baryon acoustic oscillations, and supernova -- will help improve our understanding of the nature of the mysterious dark energy. 
Speaker: Masao Sako (University of Pennsylvania) The Dark Energy Camera - A new Instrument for the Dark Energy Survey 20m The Dark Energy Survey (DES) is a next generation optical survey aimed at understanding the expansion rate of the universe using four complementary methods: weak gravitational lensing, galaxy cluster counts, baryon acoustic oscillations, and Type Ia supernovae. To perform the survey, the DES Collaboration is building the Dark Energy Camera (DECam), a 3 square degree, 520 Megapixel CCD camera which will be mounted at the prime focus of the Blanco 4-meter telescope at the Cerro Tololo Inter-American Observatory. The survey will cover 5000 square-degrees of the southern galactic cap with 5 filters (g, r, i, z, Y). DECam will be comprised of 74 250 micron thick fully depleted CCDs: 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. Construction of DECam is nearing completion. In order to verify that the camera meets technical specifications for the Dark Energy Survey and to reduce the time required to commission the instrument on the telescope, we have constructed a full sized ``Telescope Simulator'' and are performing full system testing and integration prior to shipping to CTIO. An overview of the DECam design and the status of the construction and integration tests will be presented Speaker: Dr Jiangang Hao (Fermilab) Ultra High Energy Cosmology with the POLARBEAR Telescope 20m Recent studies of the temperature anisotropy of the Cosmic Microwave Background (CMB) lend support to an inflationary origin of the universe, yet no direct evidence verifying inflation exists. Current generation experiments now focus on the polarization anisotropy in the CMB, specifically the curl component of the CMB's polarization (called the "B-mode"), which is undetected to date. The theory of inflation predicts the existence of a primordial gravitational wave background that imprints a unique signature on the polarization B-mode at large angular scales. The CMB B-mode signal also encodes gravitational lensing information at smaller angular scales, which bears the imprint of large scale cosmological structures. The quest for detection of these signals; each of which is orders of magnitude smaller than the CMB temperature, has motivated the development of background-limited detectors with precise control of systematic effects. The POLARBEAR experiment is designed to perform a deep search for the signature of gravitational waves from inflation and to characterize lensing of the CMB by large-scale structure. POLARBEAR is a 3.5 meter ground-based telescope with four arc-minute angular resolution at 150 GHz. At the heart of the POLARBEAR's receiver is an array featuring 1274 antenna-coupled superconducting transition edge sensor bolometers (TESB) cooled to 0.25 Kelvin. POLARBEAR is designed to reach a tensor-to-scalar ratio of 0.025 after two years of observation -- more than an order of magnitude improvement over the current best results, which would test physics at energies near the GUT scale. POLARBEAR had an engineering run at Cedar Flat, California in 2010 and will begin observations in the Atacama Desert in Chile in 2011. Speaker: Ms Stephanie Moyerman (UCSD) Probing Cosmology and Particle Physics with ACT 20m Over the coming decade, tiny fluctuations in temperature and polarization of the Cosmic Microwave Background (CMB) will be mapped with unprecedented resolution. 
The Planck Surveyor, the Atacama Cosmology Telescope (ACT), and the South Pole Telescope (SPT) are already making great advances. In a few years, high resolution polarization experiments, such as PolarBear, ACTPol, and SPTPol will be in full swing. While these new arc-minute resolution observations will continue to help constrain the physics of the early universe, they will also be unique in a new way - they will allow us to measure the gravitational lensing of the CMB. This lensing is the deflection of CMB photons by intervening large scale structure. CMB lensing will probe the growth of structure over cosmic time, helping constrain the total mass of neutrinos and the behavior of dark energy. In the first part of the talk, I will review the recent progress made with ACT, especially in constraining the physics of Big Bang Nucleosynthesis and the neutrino sector. In the second part, I will discuss the scientific potential of the CMB lensing signal, its first detection, a new way to constrain dark energy, and its prospects for cross-correlation with other datasets. Finally, I will discuss the upcoming polarized counterpart of ACT --- the ACTPol project, which will have greater sensitivity than ACT, and will be a premier CMB lensing experiment. I will describe our plans to extract different flavors of science from the ACTPol data, including the cross-correlations with optical lensing and galaxy surveys, such as SDSS, BOSS, DES and LSST. Speaker: Sudeep Das
npj computational materials Analyzing machine learning models to accelerate generation of fundamental materials insights Mitsutaro Umehara ORCID: orcid.org/0000-0001-8665-00281,2, Helge S. Stein1, Dan Guevarra1, Paul F. Newhouse1, David A. Boyd1 & John M. Gregoire ORCID: orcid.org/0000-0002-2863-52651 npj Computational Materials volume 5, Article number: 34 (2019) Cite this article Computational methods Machine learning for materials science envisions the acceleration of basic science research through automated identification of key data relationships to augment human interpretation and gain scientific understanding. A primary role of scientists is extraction of fundamental knowledge from data, and we demonstrate that this extraction can be accelerated using neural networks via analysis of the trained data model itself rather than its application as a prediction tool. Convolutional neural networks excel at modeling complex data relationships in multi-dimensional parameter spaces, such as that mapped by a combinatorial materials science experiment. Measuring a performance metric in a given materials space provides direct information about (locally) optimal materials but not the underlying materials science that gives rise to the variation in performance. By building a model that predicts performance (in this case photoelectrochemical power generation of a solar fuels photoanode) from materials parameters (in this case composition and Raman signal), subsequent analysis of gradients in the trained model reveals key data relationships that are not readily identified by human inspection or traditional statistical analyses. Human interpretation of these key relationships produces the desired fundamental understanding, demonstrating a framework in which machine learning accelerates data interpretation by leveraging the expertize of the human scientist. We also demonstrate the use of neural network gradient analysis to automate prediction of the directions in parameter space, such as the addition of specific alloying elements, that may increase performance by moving beyond the confines of existing data. Machine learning has transformed several research fields1,2,3,4,5,6 and is increasingly being integrated into material science research.7,8,9,10,11,12,13,14,15,16,17 Motivated by the pervasive need to design functional materials for a variety of technologies, the machine learning models for materials science have primarily focused on establishment of prediction tools.7,8,12,13,14 A complementary effort in data science for materials involves knowledge extraction from large datasets to advance understanding of the present data.10,18,19 This strategy can be employed globally, as exemplified by the recent modeling of all known materials phases to generate classifications of the elements akin to the periodic table,20 or locally to reveal the fundamental properties of a given materials system. For materials systems with low-dimensional parameter spaces, composition-property relationships can be directly mapped and represent the understanding of the underlying materials science.10,21 Composition-processing parameter spaces are often high dimensional, posing challenges for both experimental exploration of the spaces and the interpretation of the resulting data. 
Machine learning models such as neural networks excel at modeling complex data relationships but generally do not directly provide fundamental scientific insights, motivating our effort in the present work to analyze the models themselves to identify composition-property and composition-structure-property relationships that lead to fundamental materials insights. The field of combinatorial materials science comprises an experimental strategy for materials exploration and establishment of composition-structure-property relationships via systematic exploration of high-dimensional materials parameter spaces.18,22,23 High-throughput experimentation can be used to accelerate such materials exploration23,24,25,26,27,28 and enables generation of sufficiently large datasets to utilize modern machine learning algorithms. The dataset in the present work was generated using high-throughput synthesis, structural characterization, and photoelectrochemical performance mapping of BiVO4-based photoanodes29,30 as a function of composition in Bi-V-A and Bi-V-A-B compositions spaces where A and B are chosen from a set of five alloying elements. Previous manual analysis and use of materials theory provided several scientific insights in this materials system, raising the question of whether the data-to-insights process can be accelerated via machine learning. To explore that concept, we start by modeling of how raw composition and structural data relate to performance using a convolutional neural network (CNN). CNNs have been deployed in material science for tasks such as image recognition31,32,33,34 and property prediction.20,35 Analysis of gradients of the CNN model, which quantify how the predicted property varies with respect to each input dimension, can serve as a measure of the importance of each input dimension and can be further analyzed to interpret the data model,36,37,38 which is one approach to the broader effort of improving interpretability in machine learning.39,40,41 This approach has been used in materials science for classifying regions of micrographs based on their contribution to ionic conductivity,34 and we demonstrate that CNN gradient analysis can provide a general framework for data interpretation and even automate the identification of composition-structure-property relationships in high-dimensional materials spaces. We demonstrate the use of CNN-computer gradients to visualize data trends, both locally in composition space and as a global representation of high-dimensional data relationships, which, in addition to aiding human understanding of the data, can provide guidance for design of new high performance materials. We then demonstrate automated identification and communication of composition-property and composition-structure-property relationships, a compact representation of the data relationships that need to be studied to attain a fundamental understanding of the underlying materials science. With this strategy, the machine learning algorithm accelerates science by directing the scientists to data relationships that are emblematic of the fundamental materials science. Neural network gradient analysis The multi-dimensional dataset for CNN training was assembled from the high-throughput measurement of the PEC power density (P) and Raman signal for a series of BiVO4 alloys, using methods described in detail previously.29,30 The map of P over the library of samples is shown in Fig. 1a, and select Raman spectra are shown in Fig. 1b. 
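The feature representation described in the next paragraph combines each sample's composition with its normalized Raman spectrum. As a minimal sketch of how such a combined feature matrix and standardized training target could be assembled (the array names are hypothetical placeholders, not the authors' code):

    import numpy as np

    # X_comp: (N, 7) atomic fractions of Bi, V, Mo, W, Dy, Gd, Tb for the N = 1379 samples
    # X_spec: (N, 1015) Raman intensities, each spectrum normalized as described in the Methods
    # P:      (N,) measured photoelectrochemical power density
    X = np.hstack([X_comp, X_spec])     # (N, 1022) combined feature matrix
    Y = (P - P.mean()) / P.std()        # standardized training target (see Methods)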
The dataset was compiled from the set of samples comprising Bi1−xVxO2+δ compositions with x = 0.48, 0.5, and 0.52; for each of these Bi:V stoichiometries, the dataset also included a series of alloys with 5 alloying elements (Mo, W, Dy, Gd, and Tb), as well as each of the 10 pairwise combinations of these alloying elements. The 5 single-alloy spaces include 10 alloy compositions up to approximately 8 at.% and 5 duplicate samples of each of these compositions. The 10 co-alloy spaces include 17 unique co-alloy concentrations with combined alloy concentration between approximately 2 at.% and 8 at.%. For each of the 1379 samples, the feature vector Xj of jth sample is the concatenation of the Bi-V-Mo-W-Dy-Gd-Tb composition and the normalized Raman spectrum. The dataset X is a 1022 × 1379 array where rows 1 to 7 are the composition dimensions (abbreviated Xcomp) and the remaining rows are the Raman spectrum dimensions (abbreviated Xspec), which are collectively used to train the CNN model of Fig. 2 to predict the PEC power density P from any coordinate in the M-dimensional parameter space: $$\begin{array}{l}{\vec{\boldsymbol x}} = (x_1,x_2, \cdots ,x_M)\\ P_j = f^{\left( n \right)}\left( {{\vec{\boldsymbol x}}} \right)|_{{\vec{\boldsymbol x}}\, =\, {\boldsymbol{X}}_j}\end{array}$$ where n is the model index corresponding to eight independent trainings from randomly-generated initializations of the CNN. Analysis is performed on the collection of these independently trained models to help ensure that the interpreted data relationships originate from the data itself and not the initialization of the CNN. While the Introduction motivates the use of a CNN model with regards to its established role in materials science, we additionally note that the gradient analysis functionality in the Keras42 makes it a practical choice for the present work. The use of gradient analysis as opposed to a prediction tool makes the results less sensitive to the detailed structure of the CNN as the model needs only to be sufficiently expressive to model the relationships in the data, and we discuss in the SI the considerations that led to the specific structure shown in Fig. 2, as well as the predictive power of the CNN model. a The map of measured photoelectrochemical power generation for the 1379 photoanode samples. Each sample is ~1 mm2 and arranged on a 2 mm grid. b Representative Raman signals, all normalized by the maximum intensity, for each of the 16 composition spaces in the dataset Schematic of CNN model structure. The model takes the Raman spectrum and the composition as input to predict P. The differently colored layers correspond to red: dense layers acting on composition, green: convolutional 1D layers acting on spectra, yellow: flattening and concatenation layers, blue: dense layers acting on both the composition and spectral data. Each of the 10 layers of the CNN model are labelled a to j While a given model could be used to predict the performance of other compositions and/or Raman patterns, we instead explore the model itself through analysis of the gradients in performance with respect to each feature vector dimension. These gradients are readily evaluated at all feature vector positions, yielding an array of gradients akin to the partial derivative in the model for P with respect to the ith dimension of the feature vector and evaluated at sample j: $$G_{i,j}^{(n)} = \left. 
{\frac{{\partial f^{\left( n \right)}\left( {{\vec{\boldsymbol x}}} \right)}}{{\partial x_i}}} \right|_{{\vec{\boldsymbol x}}\, =\, {\boldsymbol{X}}_j}$$ For the position in composition-Raman space corresponding to a given sample, this gradient provides the model prediction for how P will be impacted by a change in any composition variable or the intensity at any position in the Raman spectrum.36,37,38 Local gradient analysis and moving beyond the existing data To illustrate the gradient analysis of individual samples, we commence with a plot of the sample composition and Raman spectrum along with the respective model gradients for a Bi0.5V0.5Tb0.014 sample with P = 0.008 mW cm−2, a very poor PEC performance (Fig. 3). For this sample, the range of gradient values obtained over the 8 model trainings is shown for each feature vector dimension. For the composition dimensions, the largest gradients are observed for Mo and W where the addition of 1 at.% of either of these elements is predicted to provide a large increase in P, which is commensurate with our extensive manual analysis of the data that identified inclusion of an electron donor (Mo or W) as the most important strategy for optimizing performance.29,30 The gradient analysis also indicates a benefit from increasing the Bi:V ratio and increasing the concentration of any of the rare earth elements. With regards to the Raman signal, the region with largest gradients is the 340–360 cm−1 region where a doublet peak exists in the measured signal and the gradient analysis indicates that improved P can be obtained by increasing the intensity between the 2 peaks and decreasing the intensity on the outer shoulders of the doublet peak, which is akin to lowering the splitting between the 2 peaks.29,43,44 This is precisely the discovery featured in a previous publication wherein we identified that a lowered m-BiVO4 distortion, which is manifested in the Raman signal by a lowered splitting of these peaks, leads to improved PEC performance.29,30 a Composition of a poor-performing sample (bar plot) and corresponding gradients for the composition dimensions (arrows), where the legend provides the relationship between arrow length and gradient magnitude, and up and down arrows indicate positive and negative gradients, respectively. The green error bar for each arrow indicates the standard deviation of the respective gradient over the 8 independent models. b Since the Raman pattern has too many dimensions to create the same arrow representation of gradients, the plot of the Raman pattern is colored by the average gradient. c The average gradient is also plotted (black) with the wavenumber-specific standard deviation over the 8 models (blue) Continuing with analysis of individual samples, we turn to visualization of a sample with a high PEC power density. The gradient analysis of the Bi0.5V0.5Gd0.024Mo0.057 sample with P = 3.2 mW cm−2 is shown in Fig. 4. While this sample is locally optimal with respect to its composition neighbors in this library, the nonzero gradients for this sample suggest that the global maximum lies beyond the extent of the present dataset, which is important from a materials design perspective as it provides guidance in the form of the direction in parameter space to modify the best samples to obtain an even higher performance material. With respect to composition, this sample has the highest Bi:V out of the three values in the dataset, and the model indicates further increase of this ratio would be beneficial. 
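The composition gradients of such a locally optimal sample can be translated into a concrete suggestion for the next composition to synthesize. The present work does not prescribe a specific update rule, but as an illustrative sketch (with x_comp denoting the sample's atomic fractions, g_comp its model-averaged composition gradients, and step a user-chosen step size), one could move along the normalized gradient and renormalize the atomic fractions:

    import numpy as np

    step = 0.01                                    # user-chosen composition step (atomic fraction)
    direction = g_comp / np.linalg.norm(g_comp)    # unit vector in composition space
    x_next = np.clip(x_comp + step * direction, 0.0, None)
    x_next /= x_next.sum()                         # atomic fractions must sum to unity

Such an extrapolation is only expected to be reliable close to the sampled region, which is why modest composition steps are appropriate.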
Concerning the alloying elements, the gradients indicate that higher rare earth concentrations would be beneficial and higher W concentration would be deleterious. The directions in parameter space of other samples are illustrated in Figure S1. Regarding the gradients in the Raman spectrum, in the 340–360 cm−1 region the variation in gradient with wavenumber is similar to that of Fig. 3, but with smaller magnitude due to the nearly-complete merging of the doublet peak in this sample. The Raman feature with negative gradient on the shoulder of the main peak, near 715 cm−1, is commensurate with increasing P by lowering the monoclinic distortion, as this peak is the antisymmetric stretching mode of V-O bond that decreases in intensity as the monoclinic distortion vanishes to yield the tetragonal scheelite polymorph.43,44 The large positive gradients in the 400–600 cm−1 range don't correspond to any detected features in the Raman patterns and are thus not immediately interpretable. a Composition of the highest performance sample (bar plot) and corresponding gradients for the composition dimensions (arrows), b Raman spectrum of the highest performance sample with heat map of gradient, and c averaged gradient (black solid line) and its standard deviation (blue filled region), in similar format to Fig. 3 Gradient ensemble visualization While Figs. 3 and 4 demonstrate gradient analysis of single samples, the ensemble of gradients from all sample provide additional insights into the most pertinent data relationships for understanding the underlying materials science. The gradients for each input dimension and sample are first averaged over the eight independently trained models, enabling analysis of the distribution of gradients for each dimension of X as shown in Fig. 5a, b. There is considerable variation in the gradients for each composition dimension, and Mo and W gradients exhibit bimodal distributions, indicating that analysis of the average variation of P with any composition dimension will not sufficiently characterize the data relationships. For comparison, three different scalar metrics for the relationship of P to each dimension of Xcomp are provided in Fig. 5c: the feature importance for a random forest regression model (FI) trained with the same input data as the CNN, the maximal information coefficient (MIC), and the Pearson correlation coefficient. While all three of these metrics provide alternate perspectives on the data relationships, only the CNN gradient analysis is commensurate with the established conclusions regarding the elemental concentrations, which include the following composition design rules (in decreasing order of importance for maximizing P) and corresponding observations from Fig. 5a: (i) W or Mo should be included to increase electrical conductivity; the composition dimensions for the elements have the highest average gradient as well as gradient distributions that extend to the highest values. (ii) Once electronic conductivity is no longer limiting performance, adding a rare earth element improves P by increasing hole transport via crystal structure modulation; the three REs have near zero gradient for many samples but their distributions each extend to high values. (iii) Depending on the alloying elements, the highest P is observed with 1:1 Bi:V or the Bi-rich variant; the gradients for both Bi and V are mostly near 0 with a small distribution at positive values for Bi and negative values for V. 
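For reference, the three scalar baselines of Fig. 5c can be reproduced with standard libraries; the sketch below assumes the feature matrix X (N × 1022) and measured power densities P from above, and uses scikit-learn for the random forest feature importance, minepy for the maximal information coefficient, and scipy for the Pearson correlation. The specific libraries and hyperparameters here are our assumptions, not a statement of what was used in this work.

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.ensemble import RandomForestRegressor
    from minepy import MINE

    # Random forest feature importance (Rand.For. FI)
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, P)
    rf_fi = rf.feature_importances_

    # Pearson correlation coefficient of each feature with P
    pearson = np.array([pearsonr(X[:, i], P)[0] for i in range(X.shape[1])])

    # Maximal information coefficient (MIC)
    mine = MINE(alpha=0.6, c=15)
    mic = np.empty(X.shape[1])
    for i in range(X.shape[1]):
        mine.compute_score(X[:, i], P)
        mic[i] = mine.mic()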
While the CNN gradient analysis is commensurate with the established scientific interpretations, it is important to note that the details of these scientific interpretations cannot be derived from the gradient analysis. Instead, this summary of gradients provides a compact visualization of the data relationships for scientists to inspect and interpret. a Gradients of each elements with violin plots showing the distribution of values over all samples and bar plot showing the average value of each of these distributions. b Averaged Raman signal colored by the sample-averaged gradients (top panel), and the sample-averaged gradients are also plotted in the bottom panel (black line) with the respective ±1 standard deviation (green area) representing variation over the sampled parameter space. c, d The relationship between P and each composition (c) and spectrum (d) dimension of the source data, as quantified by Random Forest feature importance (Rand.For. FI), Maximal information coefficient (MIC), and Pearson correlation coefficient (Pearson) Figure 5b, d contains a similar set of analyses for the Raman spectra, with the large dimensionality of Xspec hindering visualization of the full gradient distributions, prompting our visualization of the variation in gradients by plotting the green shaded region corresponding to the ±1 standard deviation in Fig. 5b. The main peak-like patterns in this sample-averaged gradient signal draws correspond to the doublet peak in the 340–360 cm−1 region where, as discussed above, the positive gradient between the pair of peaks and the negative gradient on the outer shoulders of each peak corresponds to the improvement in P with merging of the doublet peak. This gradient analysis would have greatly accelerated the identification of the corresponding structural modulation that provides the PEC improvement, which was only identified after considerable manual inspection including the development of custom analysis algorithms for identifying the data relationships. This type of guidance is not forthcoming from the three scalar-based assessments (Rand.For. FI, MIC, Pearson) of the Xspec-P relationships (Fig. 5d), which each direct primary attention to the most intense Raman peak. Gradient correlation analysis and automated detection of composition-structure-property relationships While Fig. 5a, b demonstrate the utility of the gradient analysis for generating compact, human-readable summaries of high-dimensional data, there is no clear way to automate interpretation of these visualizations. Given the importance of composition-structure-property relationships in elucidating the fundamental origins of an observed variation in a property (in this case P), we focus the automation of data interpretation via gradient analysis on reporting composition-structure-property relationships. Correlation analysis of gradients from different features (dimensions of X) quantifies the extent to which these features similarly impact P. Performing this correlation analysis is facilitated by the ability to evaluate the gradient with respect to each feature dimension at any coordinate in the feature space. Since the grid of compositions in the dataset is based on a 6-dimensional composition space that is only explored in up to three dimensions at a time, this set of samples is not conducive to direct calculation of local partial derivatives for each input dimension. 
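Evaluating the model gradients at the coordinates of the measured samples sidesteps this limitation, since the partial derivatives are obtained analytically from the trained networks rather than from finite differences on the composition grid. A minimal sketch of this evaluation with TensorFlow's GradientTape is given below; it assumes the eight independently trained Keras models are held in a list named models and that each model takes the Raman spectra and compositions as two separate inputs, as in Fig. 2.

    import numpy as np
    import tensorflow as tf

    def input_gradients(model, X_spec, X_comp):
        """Per-sample dP/dx for every composition and spectrum dimension."""
        xs = tf.convert_to_tensor(X_spec[..., None], dtype=tf.float32)  # (N, 1015, 1)
        xc = tf.convert_to_tensor(X_comp, dtype=tf.float32)             # (N, 7)
        with tf.GradientTape() as tape:
            tape.watch([xs, xc])
            p = model([xs, xc], training=False)                         # predicted P
        g_spec, g_comp = tape.gradient(p, [xs, xc])
        # composition gradients first, matching the row ordering of the feature vector
        return np.hstack([g_comp.numpy(), g_spec.numpy()[:, :, 0]])     # (N, 1022)

    # per-model gradients, then the model-averaged gradients and their spread
    G_all = np.stack([input_gradients(m, X_spec, X_comp) for m in models])  # (8, N, 1022)
    G_ave, G_std = G_all.mean(axis=0), G_all.std(axis=0)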
We performed the correlation analysis by calculating the correlation matrix (similar to covariance matrix with every value being a Pearson correlation coefficients of the respective pair of features) for each of the eight independently trained models and then averaging over the eight models. For this analysis, the V concentration dimension was ignored since the design of the composition library involves three different Bi:V values and thus V concentration is nearly linearly related to that of Bi, obscuring separate analysis of the covariance of these dimensions with any other dimension of X. Pairwise plots of the gradients \({\boldsymbol{G}}_{i,j}^{({\mathrm{n}})}\) and the correlation coefficients (averaged over the eight models) are shown in Fig. 6 for the set of six elements. Analysis of these correlation coefficients reveals sets of elements that similarly impact P. That is, from the collection of samples in the high-dimensional composition space, the model-predicted change in P with increasing concentration is correlated for elements whose functional role is similar. This correlation doesn't necessarily relate to similarity of the elements, only their similar alteration of the property of interest. Pairwise correlation analysis of gradients for 6 composition dimensions of the input data. V is excluded due to its inherent inverse correlation with Bi, and each data point in the bottom-left correlation plots represents the pair of gradients for single sample over the 8 models. Each plot on the diagonal is the histogram of gradients for the respective element, and the numbers in each box in the upper-right potion of the figure show the Pearson correlation coefficient averaged over 8 models for the respective correlation plot (correlation coefficient of gradients over the sample set) To automate identification and communication of these sets of elements with similar composition-property relationships, we choose a threshold correlation value (0.9 in this case) and find all sets of elements for which every pairwise correlation coefficient exceeds the threshold. Using the data in Fig. 6, the resulting sets are {Dy,Gd,Tb} and {W,Mo}. To provide some intuitive explanation of how the CNN encoded these commonalities, Figure S2 shows the activations of the seven dimensions of Xcomp in the first neural network layer (Fig. 2e), revealing that through training of the model, the best reconstructions of the P data were found by activating the TMs similarly and the REs similarly in this first layer, resulting in similar functional modeling for the TMs and for the REs, which produces the observed correlations in the gradients. This pattern of activations is the model's "learning" of the similar composition-property relationships. For each of these sets, we next automatically identify features in the Raman spectra that can elucidate composition-structure-property relationships. For the present work, we do not explore all composition-structure relationships, only those related to improving P. If the improvement in P upon increasing an elemental concentration is related to a structural feature in the Raman spectra, then the dimensions of X corresponding to the structural feature will have gradients correlated with the concentration gradient, and this correlation coefficient could be positive or negative depending on whether the given Raman mode is increasing or decreasing in intensity or shifting to a different wavenumber. 
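A sketch of this automated identification, covering both the element-set detection just described and the correlated Raman-feature ranges discussed in the next paragraph, is given below. It assumes the per-model gradient array G_all (8 × N × 1022) from the earlier sketch, a Raman shift grid wavenumbers (1015 points), and the thresholds quoted in the text; in practice one would retain only the maximal element sets.

    import numpy as np
    from itertools import combinations

    # Pearson correlation matrix of the gradients, per model, then averaged over the 8 models.
    # V (column 1) is dropped before the analysis, leaving Bi, Mo, W, Dy, Gd, Tb + spectrum.
    keep = [0] + list(range(2, 1022))
    C = np.mean([np.corrcoef(G[:, keep], rowvar=False) for G in G_all], axis=0)

    elements = ["Bi", "Mo", "W", "Dy", "Gd", "Tb"]       # first 6 rows/columns of C
    element_sets = []
    for r in range(2, len(elements) + 1):
        for subset in combinations(range(len(elements)), r):
            if all(C[i, j] > 0.9 for i, j in combinations(subset, 2)):
                element_sets.append([elements[i] for i in subset])

    def correlated_raman_ranges(C, elem_idx, wavenumbers, thresh=0.3):
        """Contiguous wavenumber ranges whose gradients correlate (|r| > thresh)
        with the concentration gradients of every element in the set."""
        n_elem = C.shape[0] - wavenumbers.size
        spec_block = np.abs(C[n_elem:, elem_idx])        # (1015, len(elem_idx))
        mask = np.all(spec_block > thresh, axis=1)
        ranges, start = [], None
        for i, hit in enumerate(mask):
            if hit and start is None:
                start = i
            if (not hit or i == len(mask) - 1) and start is not None:
                end = i if hit else i - 1
                ranges.append((wavenumbers[start], wavenumbers[end]))
                start = None
        return ranges

    for s in element_sets:
        idx = [elements.index(e) for e in s]
        print(s, correlated_raman_ranges(C, idx, wavenumbers))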
To automate detection of such relationships, for each of the element sets ({Dy,Gd,Tb} and {W,Mo} in this case), we identify each dimension of Xspec whose gradient correlation coefficient with respect to each element in the set exceeds a threshold value (absolute value above 0.3 in this case). Subsequent identification of the contiguous ranges of wavenumbers that meet this criterion produces the list of Raman feature locations that represent the composition-structure-property relationships. An automated report summary of these findings is illustrated in Table 1 where a list of 16 observations, each identifying a composition-property relationship or a Raman spectral region related to such a relationship, guide human investigation of the materials science. Where the human-derived materials science explanation for a given data observation has been identified or hypothesized, the materials science explanation is also summarized in the table. The observations commence with a report of the three elements whose gradients are most positive among the samples exhibiting highest P (in this case, above the 95th percentile of P), and while the table provides classification of data relationships, we note that quantification of the relationships is a powerful aspect of the gradient analysis. The next two observations are the elemental sets identified from the composition-property correlation analysis and are commensurate with observations from the anecdotal examples in Figs. 3 and 4, that Mo and W similarly increase conductivity and Dy, Gd, and Tb similarly decrease the monoclinic distortion, with both phenomena leading to improvements in P. Of the 13 observations related to composition-structure-property relationships, 6 of them (#4–9) are explained by changes to the bending and stretching modes of m-BiVO4 due to decreasing monoclinic distortion, which occurs with RE alloying and to an even greater extent in the RE-TM co-alloying spaces. Table 1 An example report of observations related to further materials optimization (1), composition-property relationships (2–3), and composition-structure-property relationships (4–16) From the perspective of knowledge discovery, it is insightful to further inspect how the observations of Table 1 relate to those in the literature. We note that this system was chosen to validate the algorithms of the present work due to its years of research precedence and publication history that are imperative for establishing a set of ground truth observations against which the automatically-generated observations can be compared. In this regard, observations 1–6 and 8–9 are commensurate with the results of ref. 29 with an important caveat that custom, not-broadly-applicable algorithms were developed to identify these relationships in that work. The automated gradient analysis also extends the observations of that work in two critical aspects, by quantifying the relationships and by identifying that Tb and Dy alloying elements follow the same relationships as Gd. Line 7 and 10–11 are observations related to the same underlying phenomenon but were not identified by the previous analysis and are thus discovered and quantified by the gradient analysis. Line 12 involves a spectral feature identifiable by previous Raman literature44 for Mo but with no literature precedent for W, and identification and quantification of its relationship to photoelectrochemical performance is new to the present work. 
Lines 13–16 are also new to the present work and involve spectral features which have yet to be identified. Due to the size and dimensionality of the dataset, observation such as the similarity of the REs may be identifiable by manual inspection of the date, but quantification of the similar effect of RE alloying at all points in the high-dimensional space is uniquely enabled by the gradient analysis, and the automated identification of the composition-structure-property relationships are not forthcoming from manual human analysis. While this anecdotal example of automated identification of key data relationships demonstrates that this analysis would have greatly accelerated the understanding of the fundamental materials science in this class of photoanodes, it is important to note limitations on the generality of the present techniques and of machine learning-based data interpretation. For the automated report generation (Table 1), we assert that there is considerable generality to the concept of analyzing CNN gradients to identify the data relationships that are critical for understanding the fundamental science, as described with the right-most column in Table 1, but the criteria for enumerating data observations (including threshold values and criteria noted above) were user-chosen in the present case and will likely need to be altered for analyzing other datasets. Other than excluding V from the gradient correlation analysis, we did not discuss methods for mitigating the influence of correlations in the set of materials (used for CNN training) in the gradient analysis. This issue is perhaps not critical for the present dataset because each alloy and co-alloy composition space was sampled with the same grid of compositions, but generalization of these techniques will require further inspection of how correlations in the source data impact CNN gradients.45,46 Finally, the CNN model has no concept of TM vs. RE classification of elements and did not "learn" anything about the chemistry of these elements, only that when it comes to alloying-based improvements to P, the TM and RE families of elements each have a characteristic data relationship whose identification enables the scientist to learn something fundamental about the underlying materials science. Consequently, this machine learning-based identification of key data relationships augments but does not replace human interpretation of scientific discoveries. To leverage the ability of CNNs to model complex data relationships in high-dimensional spaces, we trained a CNN model to predict photoelectrochemical performance of BiVO4-based photoanodes from the composition and Raman spectrum of 1379 photoanode samples containing various 3 and 4-cation combinations from a set of 7 elements. Gradients calculated from the CNN model, akin to partial derivates of the performance with respect to each input variable, enabled effective visualization of data trends at specific locations in the materials parameter space as well as collectively for the entire dataset. Automated analysis of the gradients provides guidance for research, including how to move beyond the confines of the present dataset to further improve performance. To accelerate generation of fundamental scientific understanding, correlations in the gradients are analyzed to identify the key data relationships whose interpretation by a human expert can provide comprehensive understanding of the composition-property and composition-structure-property relationships in the materials system. 
This approach to interpreting machine learning models accelerates scientific understanding and illustrates avenues for continued automation of scientific discovery. The details of the materials synthesis, photoelectrochemical measurements, and Raman measurements are described elsewhere.29,30 Briefly, two duplicate thin-film materials libraries were prepared by ink-jet printing using Bi, V, Mo, W, Tb, Gd, and Dy metal-nitrate inks on SnO2:F (FTO) coated glass. Each library was calcined at 565 °C in O2 gas for 30 min, after which one was used for Raman measurements and the other for photoelectrochemical measurements. The photoelectrochemical measurements included, for each material sample, a cyclic voltammogram (CV) using a Pt counter electrode and Ag/AgCl reference electrode in a 3-electrode cell setup. Aqueous electrolyte with potassium phosphate buffer (50 mM each of monobasic and dibasic phosphate) was used with 0.25 M sodium sulfate as a supporting electrolyte (pH 6.7). CVs were acquired for each sample on the ML at chopped illumination using a 455 nm light emitting diode (LED). Maximum photoelectrochemical power generation (P) is calculated as a figure-of-merit for photoanode performance from CV for each sample. Raman spectroscopy spectrum of each sample was acquired by averaging Raman spectra mapping of whole library with a resolution of 75 μm × 75 μm using Renishaw inVia Reflex. Composition of each sample was determined by the printed amount of ink-jet printer. Gradient analysis for visualization To analyze the CNN model, we used a visualization method similar to the previously reported method,36,37,38,51 and repeated the analysis eight times using randomly initialized models. The CNN model (f) is a function of input vector of spectrum \(\left( {{\vec{\boldsymbol x}}_{{\mathrm{spec}}}} \right)\) and composition \(\left( {{\vec{\boldsymbol x}}_{{\mathrm{comp}}}} \right)\) with output of power generation performance Ypredicted; $$\begin{array}{*{20}{l}} {Y_{{\mathrm{predicted}}}^{(n)}} \hfill & = \hfill & {f^{\left( n \right)}\left( {{\vec{\boldsymbol x}}} \right) = f^{(n)}({\vec{\boldsymbol x}}_{{\mathrm{comp}}},{\vec{\boldsymbol x}}_{{\mathrm{spec}}})} \hfill \\ {{\vec{\boldsymbol x}}} \hfill & = \hfill & {\left( {{\vec{\boldsymbol x}}_{{\mathrm{comp}}},{\vec{\boldsymbol x}}_{{\mathrm{spec}}}} \right) = (x_1,x_2,x_3, \cdots ,x_{\mathrm{M}})} \hfill \\ {{\vec{\boldsymbol x}}_{{\mathrm{comp}}}} \hfill & = \hfill & {(x_1,x_2,x_3, \cdots ,x_7)} \hfill \\ {{\vec{\boldsymbol x}}_{{\mathrm{spec}}}} \hfill & = \hfill & {(x_8,\,x_9,\,x_{10}, \cdots ,x_{1022})} \hfill \end{array}$$ where n indicates the n-th run of the analysis (n = 1..8), and M indicates the input vector dimension (=7 + 1015). 
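Here f^(n) denotes the n-th of the eight independently trained networks. For concreteness, a Keras sketch consistent with the layer description given in the CNN model subsection below is shown here; training details such as the optimizer, loss, and number of epochs are not specified in this text and are therefore placeholders.

    from tensorflow import keras
    from tensorflow.keras import layers

    def build_model():
        # convolutional branch acting on the 1015-point Raman spectrum (Fig. 2a-c)
        spec_in = keras.Input(shape=(1015, 1), name="raman")
        s = layers.Dropout(0.25)(spec_in)
        s = layers.Conv1D(filters=2, kernel_size=7, activation="elu")(s)  # no pooling
        s = layers.Dropout(0.25)(s)
        s = layers.Flatten()(s)                      # 1009 x 2 = 2018 features

        # dense branch acting on the 7-element composition vector (Fig. 2d-f)
        comp_in = keras.Input(shape=(7,), name="composition")
        c = layers.Dense(16, activation="elu")(comp_in)
        c = layers.Dense(16, activation="elu")(c)

        # concatenation and joint dense layers (Fig. 2g-j)
        h = layers.Concatenate()([s, c])             # 2018 + 16 = 2034 features
        h = layers.Dense(32, activation="elu")(h)
        h = layers.Dense(32, activation="elu")(h)
        out = layers.Dense(1, activation="linear")(h)

        model = keras.Model([spec_in, comp_in], out)
        model.compile(optimizer="adam", loss="mse")  # assumed settings, not stated in the text
        return model

    # eight independently (randomly) initialized models, each then fit to the same data
    models = [build_model() for _ in range(8)]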
The input dataset X is a matrix of each j-th input vector; $$\begin{array}{*{20}{l}} {X = \left( {X_1, \cdots ,X_N} \right)} \hfill & = \hfill & {\left( {\begin{array}{*{20}{c}} {X_{1,1}} & \cdots & {X_{1,N}} \\ \vdots & \ddots & \vdots \\ {X_{M,1}} & \cdots & {X_{M,N}} \end{array}} \right) = \left( {\begin{array}{*{20}{c}} {X_{{\mathrm{comp}}}} \\ {X_{{\mathrm{spec}}}} \end{array}} \right)} \hfill \\ {X_{{\mathrm{comp}}}} \hfill & = \hfill & {\left( {\begin{array}{*{20}{c}} {X_{1,1}} & \cdots & {X_{1,N}} \\ \vdots & \ddots & \vdots \\ {X_{7,1}} & \cdots & {X_{7,N}} \end{array}} \right)} \hfill \\ {X_{{\mathrm{spec}}}} \hfill & = \hfill & {\left( {\begin{array}{*{20}{c}} {X_{8,1}} & \cdots & {X_{8,N}} \\ \vdots & \ddots & \vdots \\ {X_{1022,1}} & \cdots & {X_{1022,N}} \end{array}} \right)} \hfill \end{array}$$ where Xj indicate the inputs vector of j-th sample, and N indicates the total number of samples (=1379). We defined gradient matrix G as a partial derivative in output with respect to the input value in input vectors; $$G_{i,j}^{(n)} = \left. {\frac{{\partial f^{\left( n \right)}}}{{\partial x_i}}} \right|_{{\vec{\boldsymbol x}} = X_j}$$ where j indicates j-th sample and i indicates i-th value in input vector. Also, we calculated average and standard deviation of gradient form eight runs; $${G_{i,j}^{{\mathrm{ave}}}} = {\frac{1}{8}\mathop {\sum }\limits_{n = 1}^8 G_{i,j}^{\left( n \right)}}$$ $${G_{i,j}^{{\mathrm{std}}}} = {\sqrt {\frac{1}{8}\mathop {\sum }\limits_{n = 1}^{8} \left( {G_{i,j}^{(n)} - G_{i,j}^{{\mathrm{ave}}}} \right)^2} }$$ where Gave is averaged gradient of 8 models, and Gstd is standard deviation in 8 models. These gradients indicate how much impact the input value has on the output; if the gradient is positive, then the input value has positive influence, and if the gradient is negative, then the input value has negative influence on the output. The Pearson correlation coefficient matrix C is defined as follows; $$\begin{array}{*{20}{l}} C \hfill & = \hfill & {\frac{1}{8}\mathop {\sum }\limits_0^8 C^{(n)}} \hfill \\ {C_{ik}^{\left( n \right)}} \hfill & = \hfill & {{\mathrm{Pearson}}\left( {G_i^{\left( n \right)},G_k^{\left( n \right)}} \right) = \frac{{{\mathrm{cov}}\left( {G_i^{\left( n \right)},G_k^{\left( n \right)}} \right)}}{{\sigma _{G_i^{\left( n \right)}}\sigma _{G_k^{\left( n \right)}}}}} \hfill \\ {G_i} \hfill & = \hfill & {(G_{i1},G_{i2}, \cdots ,G_{iN})} \hfill \end{array}$$ where Gi is gradient vector with respect to i-th parameter in input vector, cov(Gi, Gk) is covariance of Gi and Gk, \(\sigma _{G_i}\) is standard deviation of Gi. G has 1022 (input vector dimension = 1015 + 7) × 1379 (sample number) dimension, and C has 1022 × 1022 dimension. CNN model CNN model was constructed in python using Keras package with Tensorflow backend, a schematic model description is shown in Fig. 2. There are two input vectors and one output value in this model; a spectrum input vector \({\vec{\boldsymbol x}}_{{\mathrm{spec}}}\), a composition input vector \({\vec{\boldsymbol x}}_{{\mathrm{comp}}}\), and output value Y. The spectrum input vector is 1015 dimensions-length with a range from 300 to 1400 cm−1 wavenumbers of each sample. Each spectrum is normalized by the main of the peak value at around 825 cm−1, which is attributed to V-O symmetric stretching vibration mode of BiVO4. 
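As a small illustration of this normalization (the exact window used to locate the ~825 cm−1 peak is our assumption), each spectrum can be divided by its maximum intensity within that peak region:

    import numpy as np

    # spec_raw: (N, 1015) Raman intensities on the wavenumber grid `wavenumbers` (300-1400 cm-1)
    window = (wavenumbers >= 800) & (wavenumbers <= 850)       # region of the ~825 cm-1 peak
    peak_height = spec_raw[:, window].max(axis=1, keepdims=True)
    X_spec = spec_raw / peak_height                            # normalized spectra fed to the CNN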
The composition input vector is a 7-length vector of the atomic fractions of the elements (Bi, V, Mo, W, Tb, Gd, and Dy); each value lies between 0 and 1, and the values in a vector sum to unity (Bi + V + Mo + W + Tb + Gd + Dy = 1). The output value Y of this model is the standardized maximum photoelectrochemical power generation P; Yi = (Pi − μ)/σ, i = 0, 1, …, N, where μ is the mean value of P, σ is the standard deviation of P, i indicates the i-th sample, and N is the total number of samples. These input vectors are fed into the first layers as shown in Fig. 2a, d. The first layer for the spectrum input vector is an input layer for the following CNN layer (see Fig. 2a). This first layer has dropout with a dropout rate of 0.25 (not shown in Fig. 2). The second layer, Fig. 2c, is a CNN layer, and the kernels of this layer are shown in Fig. 2b. The kernel size is 7 and the number of filters is 2. The Exponential Linear Unit (ELU) is used as the activation function of this layer. This layer does not have any pooling layer. It is worth mentioning that we found the prediction performance of the model without a pooling layer to be better than that with a pooling layer, which we attribute to the sensitivity of the pooling-free model to peak positions. This layer also has dropout with a dropout rate of 0.25. The output of this layer is flattened and fed into the concatenation layer, Fig. 2g. The first layer for the composition input vector is an input layer for the following neural network layer (see Fig. 2d). The next layer, Fig. 2e, is a neural network layer with 16 units, whose activation function is ELU. This layer does not have dropout. The next layer, Fig. 2f, has 16 units and ELU activation. In the next layer, Fig. 2g, the output of the CNN layer (Fig. 2c) is flattened and concatenated with the output of the composition layer (Fig. 2f), and this 2034-length output (1009 × 2 + 16) is fed into the following neural network layer, Fig. 2h, with 32 units and ELU activation. The output of this layer is then fed into the next neural network layer, Fig. 2i, with 32 units and ELU activation, followed by the output layer, Fig. 2j, with one unit and linear activation, which predicts the output value Y. The authors declare that the code used to perform the analysis is provided at https://github.com/johnmgregoire/CNN_Gradient_Analysis. The authors declare that the data supporting the findings of this study are available within the paper and its supplementary information files. Hinton, G. et al. Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Process. Mag. 29, 82–97 (2012). Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet Classification with Deep Convolutional Neural Networks. In Proc. Advances In Neural Information Processing Systems 1097–1105 (Curran Associates/Red Hook, NY, USA, 2012). Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. https://arxiv.org/abs/1409.1556 (2014). Accessed 10 Apr 2015. Jurafsky, D. & Martin, J. H. Speech and Language Processing: An Introduction to Natural Language Processing. In Computational Linguistics and Speech Recognition (Pearson Education, London, 2000). Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489 (2016). Levinson, J. et al. Towards fully autonomous driving: Systems and algorithms. In Proc. IEEE Intelligent Vehicles Symposium (Curran Associates/Red Hook, NY, USA, 2011). Hautier, G., Fischer, C., Ehrlacher, V., Jain, A.
& Ceder, G. Data mined ionic substitutions for the discovery of new compounds. Inorg. Chem. 50, 656–663 (2011). Xue, D. et al. Accelerated search for materials with targeted properties by adaptive design. Nat. Commun. 7, 11241 (2016). Welborn, M., Cheng, L. & Miller, T. F. Transferability in machine learning for electronic structure via the molecular orbital basis. J. Chem. Theory Comput. 14, 4772–4779 (2018). Lookman, T., Alexander, F. J. & Rajan, K. Information science for materials discovery and design. Springer Series in Materials Science. (Springer International Publishing, Switzerland, 2016). Bartók, A. P., Kondor, R. & Csányi, G. On representing chemical environments. Phys. Rev. B. 87, 1–16 (2013). Ward, L., Agrawal, A., Choudhary, A. & Wolverton, C. A general-purpose machine learning framework for predicting properties of inorganic materials. NPJ Comput. Mater. 2, 16028 (2016). Hattrick-Simpers, J. R., Choudhary, K. & Corgnale, C. A simple constrained machine learning model for predicting high-pressure-hydrogen-compressor materials. Mol. Syst. Des. Eng. 3, 509–517 (2018). Stanev, V. et al. Machine learning modeling of superconducting critical temperature. NPJ Comput. Mater. 4, 29 (2018). Nikolaev, P. et al. Autonomy in materials research: a case study in carbon nanotube growth. npj Comput. Mater. 2, 16031 (2016). Carleo, G. & Troyer, M. Solving the quantum many-body problem with artificial neural networks. Science 355, 602–606 (2017). Alberi, K. et al. The 2019 materials by design roadmap. J. Phys. D. Appl. Phys. 52, 013001 (2018). Hattrick-Simpers, J. R., Gregoire, J. M. & Kusne, A. G. Perspective: composition–structure–property mapping in high-throughput experiments: turning data into knowledge. APL Mater. 4, 53211 (2016). Rajan, K. Combinatorial materials sciences: experimental strategies for accelerated knowledge discovery. Annu. Rev. Mater. Res. 38, 299–322 (2008). Zhou, Q. et al. Learning atoms for materials discovery. Proc. Natl Acad. Sci. USA 115, E6411–E6417 (2018). Dorenbos, P. Systematic behaviour in trivalent lanthanide charge transfer energies. J. Phys. Condens. Matter 15, 8417–8434 (2003). Green, M. L., Takeuchi, I. & Hattrick-simpers, J. R. Applications of high throughput (combinatorial) methodologies to electronic, magnetic, optical, and energy-related materials. J. Appl. Phys. 113, 231101 (2013). Kusne, A. G., Keller, D., Anderson, A., Zaban, A. & Takeuchi, I. High-throughput determination of structural phase diagram and constituent phases using GRENDEL. Nanotechnology 26, 444002 (2015). Van Dover, R. B., Schneemeyer, L. F. & Fleming, R. M. Discovery of a useful thin-film dielectric using a composition-spread approach. Nature 392, 162–164 (1998). Wang, J. et al. Identification of a blue photoluminescent composite material from a combinatorial library. Science 279, 1712 (1998). Reddington, E., Sapienza, A., Gurau, B., Viswanathan, R. & Sarangapani, S. Combinatorial electrochemistry: a highly parallel, optical screening method for discovery of better electrocatalysts. Science 280, 1735–1737 (1998). Yan, Q. et al. Solar fuels photoanode materials discovery by integrating high-throughput theory and experiment. Proc. Natl Acad. Sci. USA 114, 3040–3043 (2017). Suram, S. K. et al. Automated phase mapping with AgileFD and its application to light absorber discovery in the V-Mn-Nb oxide system. ACS Comb. Sci. 19, 37–46 (2017). Newhouse, P. F. et al. Combinatorial alloying improves bismuth vanadate photoanodes via reduced monoclinic distortion. Energy Environ. Sci. 
11, 2444–2457 (2018). Newhouse, P. F. et al. Multi-modal optimization of bismuth vanadate photoanodes via combinatorial alloying and hydrogen processing. Chem. Commun. 55, 489–492 (2018). Ling, J. et al. Building data-driven models with microstructural images: generalization and interpretability. Mater. Discov. 10, 19–28 (2017). Ziatdinov, M., Maksov, A. & Kalinin, S. V. Learning surface molecular structures via machine vision. npj Comput. Mater. 3, 31 (2017). Ziatdinov, M. et al. Deep learning of atomically resolved scanning transmission electron microscopy images: chemical identification and tracking local transformations. ACS Nano 11, 12742–12752 (2017). Kondo, R., Yamakawa, S., Masuoka, Y., Tajima, S. & Asahi, R. Microstructure recognition using convolutional neural networks for prediction of ionic conductivity in ceramics. Acta Mater. 141, 29–38 (2017). Kajita, S., Ohba, N., Jinnouchi, R. & Asahi, R. A universal 3D voxel descriptor for solid-state material informatics with deep convolutional neural networks. Sci. Rep. 7, 1–9 (2017). Simonyan, K., Vedaldi, A. & Zisserman, A. Deep inside convolutional networks: visualising image classification models and saliency maps. http://arxiv.org/abs/1312.6034 (2013). Accessed 19 Apr 2014. Zeiler, M. D. & Fergus, R. Visualizing and understanding convolutional networks. In Proc. European conference on computer vision 818–833 (Springer/Cham, Switzerland, 2014). Springenberg, J. T., Dosovitskiy, A., Brox, T. & Riedmiller, M. Striving for simplicity: the all convolutional net. http://arxiv.org/abs/1412.6806 (2014). Accessed 13 Apr 2015. Mascharka, D., Tran, P., Soklaski, R. & Majumdar, A. Transparency by design: closing the gap between performance and interpretability in visual reasoning. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition 4942–4950 (Curran Associates/Red Hook, NY, USA, 2018). Zhou, S.-M. & Gan, J. Q. Low-level interpretability and high-level interpretability: a unified view of data-driveninterpretable fuzzy system modelling. Fuzzy Sets Syst. 159, 3091–3131 (2008). Wachter, S., Mittelstadt, B. & Floridi, L. Transparent, explainable, and accountable AI for robotics. Sci. Robot. 2, eaan6080 (2017). Chollet, F. & others. Keras. https://keras.io (2015). Gutkowski, R. et al. Unraveling compositional effects on the light-induced oxygen evolution in Bi(V–Mo–X)O4 material libraries. Energy Environ. Sci. 10, 1213–1221 (2017). Zhou, D., Pang, L., Wang, H., Guo, J. & Randall, C. A. Phase transition, Raman spectra, infrared spectra, band gap and microwave dielectric properties of low temperature firing (Na0.5xBi1_0.5x)(MoxV1_x)O4 solid solution ceramics with scheelite structures. J. Mater. Chem. 21, 18412–18420 (2011). Ancona, M., Ceolini, E., Oztireli, C. & Gross, M. Towards better understanding of gradient-based attribution methods for Deep Neural Networks. In Proc. 6th International Conference on Learning Representations (ICLR, Zurich, 2018). Sundararajan, M., Taly, A. & Yan, Q. Axiomatic attribution for deep networks. https://arxiv.org/abs/1703.01365 (2017). Accessed 13 Jun 2017. Yao, W., Iwai, H. & Ye, J. Effects of molybdenum substitution on the photocatalytic behavior of BiVO 4. Dalt. Trans. 11, 1426–1430 (2008). Gotić, M., Musić, S., Ivanda, M., Šoufek, M. & Popović, S. Synthesis and characterisation of bismuth (III) vanadate. J. Mol. Struct. 744, 535–540 (2005). Hardcastle, F. D., Wachs, I. E., Eckert, H. & Jefferson, D. A. 
Vanadium (V) environments in bismuth vanadates: a structural investigation using Raman spectroscopy and solid state 51V NMR. J. Solid State Chem. 90, 194–210 (1991). Merupo, V. I., Velumani, S., Oza, G., Makowska-Janusik, M. & Kassiba, A. Structural, electronic and optical features of molybdenum-doped bismuth vanadium oxide. Mater. Sci. Semicond. Process. 31, 618–623 (2015). Chollet, F. How convolutional neural networks see the world. https://blog.keras.io/how-convolutional-neural-networks-see-the-world.html (2016). Accessed 30 Jan 2016. This study is based upon work performed by the Joint Center for Artificial Photosynthesis, a DOE Energy Innovation Hub, supported through the Office of Science of the U.S. Department of Energy (Award No. DE-SC0004993). Development of the algorithm for automating the model interpretation (J.M.G. and H.S.S.) was funded by Toyota Research Institute through the Accelerated Materials Design and Discovery program. Joint Center for Artificial Photosynthesis, California Institute of Technology, Pasadena, CA, 91125, USA Mitsutaro Umehara, Helge S. Stein, Dan Guevarra, Paul F. Newhouse, David A. Boyd & John M. Gregoire Future Mobility Research Department, Toyota Research Institute of North America, Ann Arbor, MI, 48105, USA Mitsutaro Umehara Helge S. Stein Dan Guevarra Paul F. Newhouse David A. Boyd John M. Gregoire M.U. performed model training and gradient analysis. H.S.S. and D.G. assisted with design of the model and comparisons to other techniques. P.F.N., D.G. and D.A.B. performed all experiments. M.U., H.S.S., D.G. and J.M.G. interpreted model outputs and created data visualization schemes. J.M.G. created algorithm for automated relationship identification with assistance from M.U. and H.S.S. M.U., H.S.S. and J.M.G. were the primary authors of the manuscript. Correspondence to John M. Gregoire. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Complete dataset used for model training Umehara, M., Stein, H.S., Guevarra, D. et al. Analyzing machine learning models to accelerate generation of fundamental materials insights. npj Comput Mater 5, 34 (2019). https://doi.org/10.1038/s41524-019-0172-5
DOI:10.1088/1751-8113/49/42/425202 Solvability of a Lie algebra of vector fields implies their integrability by quadratures J. Cariñena, F. Falceto, J. Grabowski Journal of Physics A: Mathematical and Theoretical, volume 49 We present a substantial generalisation of a classical result by Lie on integrability by quadratures. Namely, we prove that all vector fields in a finite-dimensional transitive and solvable Lie algebra of vector fields on a manifold can be integrated by quadratures. Solvable Lie Algebras of Vector Fields and a Lie's Conjecture K. Grabowska, J. Grabowski We present a local and constructive differential geometric description of finite-dimensional solvable and transitive Lie algebras of vector fields. We show that it implies a Lie's conjecture for such… Integrable systems in cosymplectic geometry B. Jovanović, K. Lukić Motivated by the time-dependent Hamiltonian dynamics, we extend the notion of Arnold–Liouville and noncommutative integrability of Hamiltonian systems on symplectic manifolds to that on cosymplectic… Jacobi Multipliers in Integrability and the Inverse Problem of Mechanics J. Cariñena, J. Fernández-Núñez The general theory of the Jacobi last multipliers in geometric terms is reviewed and the theory is applied to different problems in integrability and the inverse problem for one-dimensional mechanical systems. Reduction and integrability: a geometric perspective J. Cariñena A geometric approach to integrability and reduction of dynamical system is developed from a modern perspective. The main ingredients in such analysis are the infinitesimal symmetries and the tensor… Darboux coordinates for Hamiltonian structures defined by Novikov algebras. I. Strachan The Gauss-Manin equations are solved for a class of flat-metrics defined by Novikov algebras, this generalizing a result of Balinskii and Novikov who solved this problem in the case of commutative… Non-commutative integrability, exact solvability and the Hamilton–Jacobi theory S. Grillo Analysis and Mathematical Physics The non-commutative integrability (NCI) is a property fulfilled by some Hamiltonian systems that ensures, among other things, the exact solvability of their corresponding equations of motion. The… Stratified Lie systems: theory and applications J. Cariñena, J. de Lucas, D. Wysocki A stratified Lie system is a nonautonomous system of first-order ordinary differential equations on a manifold M described by a t-dependent vector field X=∑α=1rgαXα , where X 1, …, X r are vector… Foliated Lie systems: Theory and applications. J. Cariñena, J. Lucas A $\mathcal{F}$- foliated Lie system is a first-order system of ordinary differential equations whose particular solutions are contained in the leaves of the foliation $\mathcal{F}$ and all… Quasi-Lie schemes for PDEs J. Cariñena, J. Grabowski, J. de Lucas International Journal of Geometric Methods in Modern Physics The theory of quasi-Lie systems, i.e. systems of first-order ordinary differential equations that can be related via a generalized flow to Lie systems, is extended to systems of partial differential… Transitive nilpotent Lie algebras of vector fields and their Tanaka prolongations K. Grabowska, J. Grabowski, Z.
Ravanpak Transitive local Lie algebras of vector fields can be easily constructed from dilations of $\mathbb{R}^n$ associating with coordinates positive weights (give me a sequence of $n$ positive integers… Geometry of Lie integrability by quadratures J. Cariñena, F. Falceto, J. Grabowski, M. F. Rañada In this paper, we extend the Lie theory of integration by quadratures of systems of ordinary differential equations in two different ways. First, we consider a finite-dimensional Lie algebra of… The Euler-Jacobi-Lie integrability theorem V. Kozlov This paper addresses a class of problems associated with the conditions for exact integrability of systems of ordinary differential equations expressed in terms of the properties of tensor… Introduction to Lie Algebras and Representation Theory J. Humphreys Preface.- Basic Concepts.- Semisimple Lie Algebras.- Root Systems.- Isomorphism and Conjugacy Theorems.- Existence Theorem.- Representation Theory.- Chevalley Algebras and Groups.- References.-… Generalized Liouville method of integration of Hamiltonian systems A. S. Mishchenko, A. Fomenko In this paper we shall show that the equations of motion of a solid, and also Liouville's method of integration of Hamiltonian systems, appear in a natural manner when we study the geometry of level… Remarks on a Lie Theorem on the Integrability of Differential Equations in Closed Form is integrable by quadratures [1] (for details, see also [2, 3]). More precisely, all of its solutions can be found by "algebraic operations" (including inversion of functions) and "quadratures," that… Remarks on nilpotent Lie algebras of vector fields. J. Grabowski In [1] a local description of analytic vector fields finitely generating a transitive nilpotent Lie algebra L on a manifold is given. Our aim is to generalize this result by (i) omitting the… An extension of a theorem of Nagano on transitive Lie algebras H. Sussmann Let M be a real analytic manifold, and let L be a transitive Lie algebra of real analytic vector fields on M. A concept of completeness is introduced for such Lie algebras. Roughly speaking, L is… On the structure of transitively differential algebras G. Post We study finite-dimensional Lie algebras of polynomial vector fields in $n$ variables that contain the vector fields ${\partial}/{\partial x_i} \; (i=1,\ldots, n)$ and $x_1{\partial}/{\partial x_1}+… Mathematical Methods of Classical Mechanics V. Arnold Part 1 Newtonian mechanics: experimental facts investigation of the equations of motion. Part 2 Lagrangian mechanics: variational principles Lagrangian mechanics on manifolds oscillations rigid… W. Ledermann
EPJ Data Science Win-stay lose-shift strategy in formation changes in football Kohei Tamura1,2 & Naoki Masuda3 EPJ Data Science volume 4, Article number: 9 (2015) Cite this article Managerial decision making is likely to be a dominant determinant of performance of teams in team sports. Here we use Japanese and German football data to investigate correlates between temporal patterns of formation changes across matches and match results. We found that individual teams and managers both showed win-stay lose-shift behavior, a type of reinforcement learning. In other words, they tended to stick to the current formation after a win and switch to a different formation after a loss. In addition, formation changes did not statistically improve the results of succeeding matches. The results indicate that a swift implementation of a new formation in the win-stay lose-shift manner may not be a successful managerial rule of thumb. Exploring the rules governing decision making has fascinated various fields of research, and its implications range from our daily lives to corporate and governmental settings. In economic contexts in the widest sense, individuals often modify their behavior based on their past experiences, attempting to enhance the benefit received in the future. Such decision making strategies are generally called reinforcement learning. In reinforcement learning, behavior that has led to a large reward will be selected with a larger frequency, or the behavior will be incrementally modified toward the rewarded one. Reinforcement learning is common in humans [1, 2] and non-humans [3], is implemented with various algorithms [4], has theoretical underpinnings [1, 4], and has neural substrates [5, 6]. A simple version of reinforcement learning is the so-called win-stay lose-shift (WSLS) strategy [7, 8]. An agent adopting this strategy sticks to the current behavior if the agent is satisfied. The agent changes its behavior if unsatisfied. Experimental studies employing human participants have provided a line of evidence in favor of WSLS in situations such as the repeated Prisoner's Dilemma [9, 10], gambling tasks [11, 12], and tasks in which participants construct virtual stone tools [13–15]. It has also been suggested in nonscientific contexts that decisions by athletes and gamblers are often consistent with WSLS patterns even if the outcome of games seems to be independent of the decision [16]. Association football (also known as soccer; hereafter referred to as football) is one of the most popular sports in the world and provides huge business opportunities. The television rights of the English Premier League yield over two billion euros per year [17]. Transfer fees of top players can be tens of millions of euros [18]. Various aspects of football, not only watching but also betting [19] and the history of tactics [20], enjoy popularity. Football and other team sports also provide data for leadership studies because a large amount of sports data is available and the performance of teams and players can be unambiguously measured by match results [21–23]. In the present study, using data obtained from football matches, we examine the possibility that managers of teams use the WSLS strategy. Managers can affect the performance of teams through selections of players, training of players, and implementation of tactics including formations [18].
In particular, a formation is a part of tactics that determines how players participate in offense and defense [24] and is considered to affect match results [24, 25]. Managerial decision making in substituting players during a match may affect the probability of winning [25]. We hypothesize that a manager continues to use the same formation if he has won the previous match, whereas he experiments with another formation following a loss in the previous match. The WSLS and more general reinforcement learning posit that unsuccessful individuals modify their behavior to increase the probability of winning. Therefore, we are interested in whether a formation change improves the performance of a team. To clarify this point, we also investigate effects of formation changes on the results of succeeding matches. We collected data on football matches from two websites, J-League Data Site (officially, "J. League") [26] on J-League, and Kicker-online [27] on Bundesliga. J-League and Bundesliga are the most prestigious professional football leagues in Japan and Germany, respectively. We refer to the two data sets as the J-League and the Bundesliga data sets. The two data sets contain, for each team and match, the season, date, manager's name, result (i.e., win, draw, or loss), and starting formation. Basic statistics of the data sets are summarized in Table 1. The distributions of the probability of winning for teams and managers are shown in Figure 1 for the two data sets. Because the strength of a team apart from the manager was considered to affect the probability of winning, in Figure 1, we treated a manager as different data points when he directed different teams. The same caveat applies to all the following analysis focusing on individual managers (Figures 3-6). Distributions of the probability of winning. (a) Distribution for the teams in the J-League data. (b) Distribution for the managers in the J-League data. (c) Distribution for the teams in the Bundesliga data. (d) Distribution for the managers in the Bundesliga data. The colored bars correspond to the teams or managers that have played at least 100 matches. Table 1 Statistics of the J-League and Bundesliga data sets Between 1993 and 2004, except for 1996, each season of J-League was divided into two half seasons. After the two half seasons had been completed, two champion teams, each representing a half season, played play-off matches. We regarded each half season as a season because intervals between two half seasons ranged from ten days to two months and were therefore longer than one week, which was a typical interval between two matches within a season. We also carried out the same analysis when we regarded one year, not one half season, as a season and verified that the main results were unaltered (Appendix A). We also collected data on Bundesliga from another website, Fussballdaten [28]. We focused on the Kicker-online data rather than the Fussballdaten data because the definition of the position was coarser for the Fussballdaten data (i.e., a player was not assumed to change his position during a season) than for the Kicker-online data. Nevertheless, to verify the robustness of the following results, we also analyzed the Fussballdaten data (Appendix B). Definition of formation The definition of formation was different between the two data sets. In the J-League data, each of the ten field players was assigned to defender (DF), midfielder (MF), or forward (FW) in each match.
We defined formation as a triplet of the numbers of DF, MF, and FW players, which sum up to ten. For example, a formation 4-4-2 implies four DFs, four MFs, and two FWs. In the Bundesliga data, the starting positions of the players were given on a two-dimensional map of the pitch (Figure 2). For this data set, we defined formation as follows. First, we measured the distance between the goal line and the bottom edge of the image representing each player along the vertical axis (e.g., 113 shown in Figure 2). We referred to the HTML source code of Kicker-online to do this. The unit of the distance is pixel (px). The distance between the goal line and the half-way line is between 45 and 60 m in real fields. The same distance is approximately equal to 500 px in Kicker-online. Therefore, 1 px in Kicker-online roughly corresponds to 10 cm in real fields. Although the HTML source code also included the distance of players from the left touch line, we neglected this information because the primary determinant of the player's position seems to be the distance from the goal line rather than that from the left or right touch line, as implied by the terms DF, MF, and FW. Second, we grouped players whose distances from the goal line were the same. Third, we ordered the groups of players in terms of the distance, resulting in an ordered set of the numbers of players at each distance value. The set of numbers defined a formation. For example, when the distances of the ten field players are equal to 113 px, 113 px, 113 px, 113 px, 236 px, 236 px, 359 px, 359 px, 359 px, and 441 px, the formation of the team is defined to be 4-2-3-1 (Figure 2). Definition of formation in the Bundesliga data. Kicker-online gives the starting positions of the players as two-dimensional coordinates on the pitch. Field players with the identical distance from the goal line are aggregated into the same position. The starting positions shown in the figure are coded as 4-2-3-1. Among all matches in the Bundesliga data, the smallest nonzero distance between two players was equal to 31 px. Therefore, we did not have to worry about the possibility that players possessed almost the same distance values while being classified into distinct positions. For example, there was no case in which the distances of two field players from the goal line were equal to 113 px and 114 px. For both data sets, a formation was defined as an ordered set of numbers, whereas the definition differs for the two data sets. For example, forward players possessing distance values 359 and 441 were classified into different positions in the Bundesliga data, whereas they belonged to the same position in the J-League data if they were both assigned to FW. In the following, we regarded that formation was changed when the ordered set of numbers differed between two consecutive matches. Figures 3(a)-(d) show the distribution of the probability that a team or manager has changed the formation in the two leagues. To calculate the probability of formation changes for a team, we excluded the first match in each season and the matches immediately after a change in the manager. As in the case of formation changes, we regarded that a manager was changed when the manager directed a team in a given match but did not do so in the next match. With this definition, a short absence of a chief manager due to illness, for example, may induce formation changes. However, we adhered to this definition because of the lack of further information on behavior of managers. 
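To make the coding of formations concrete, the following sketch (a hypothetical helper, not taken from the original analysis) converts the ten field players' distances from the goal line, as extracted from the Kicker-online HTML, into a formation string and flags a change between consecutive matches.

```python
from collections import Counter

def formation_from_distances(distances_px):
    """distances_px: the ten field players' distances from the goal line (px)."""
    counts = Counter(distances_px)                    # group identical distances
    ordered = sorted(counts)                          # defense first, attack last
    return "-".join(str(counts[d]) for d in ordered)  # e.g. "4-2-3-1"

def formation_changed(previous_match, current_match):
    """True if the ordered set of numbers differs between consecutive matches."""
    return formation_from_distances(previous_match) != formation_from_distances(current_match)

# The example from the text: distances 113 (x4), 236 (x2), 359 (x3), 441 (x1) px
print(formation_from_distances([113, 113, 113, 113, 236, 236, 359, 359, 359, 441]))  # 4-2-3-1
```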
In addition, as explained in Section 2.1, we treated a manager as different data points when he led different teams. Distributions of the probability of formation changes. (a) Distribution for the teams in the J-League data. (b) Distribution for the managers in the J-League data. (c) Distribution for the teams in the Bundesliga data. (d) Distribution for the managers in the Bundesliga data. In (a)-(d), the colored bars correspond to the teams or managers that have played at least 100 matches. (e) Probability of formation change in each season in J-League, aggregated over the different teams and managers. A circle represents a season. Because the J-League data set did not have the information on managers between 1993 and 1998, we neglected changes of managers between 1993 and 1998. Between 1993 and 2004, except for 1996, intervals between two circles are dense because a season consists of two half seasons. (f) Probability of formation change in each season in Bundesliga, aggregated over the different teams and managers. The frequency of formation changes as a function of time is shown in Figures 3(e) and 3(f) for J-League and Bundesliga, respectively. The figures suggest that the frequency of formation change is stable over years in J-League, but not in Bundesliga. Finally, we measured burstiness and memory coefficient [29] for interevent times of formation changes to quantify temporal patterns of formation changes. The results are shown in Appendix C. GLMM To statistically examine whether patterns of formation changes were consistent with WSLS behavior, we investigated effects of previous matches and other factors on the likelihood of formation change for each team. If managers used the WSLS, the effect of the win and loss in the previous match on the likelihood of formation change should be significantly negative and positive, respectively. We used a generalized linear mixed model (GLMM) with binomial errors and a logit-link function. The dependent variable was the occurrence or lack thereof of formation changes, which was binary. As independent variables, we included the binary variable representing whether or not the stadium was the home of the team (i.e., home or away) and the ternary result of the previous match (i.e., win, draw, or loss). We designated the draw as the reference category for the match result. Because the likelihood of formation changes may be affected by a streak of wins or losses, we also included the result of the second last match as an independent variable. The difference between the focal team's strength and the opponent's strength was also an independent variable. The strength of a team was defined by the probability of winning in the season. We estimated the strength of a team separately for each season because it can vary across seasons. The name of the manager was included as a random effect (random intercept). In this and the following analysis, we excluded the first match in each season for each team because we considered that the result of the last match in the preceding season would not directly affect the first match in a new season. In addition, we excluded matches immediately after a change of manager because we were not interested in formation changes induced by a change of manager. We further excluded the second match in each season for each team from the GLMM analysis when we employed the result of the second last match as an independent variable. 
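The GLMM itself was fitted in R with the lme4 package, as noted below. Purely as an illustration of the model specification, a rough Python analogue could replace the manager random intercept with fixed-effect dummies in an ordinary logistic regression; the data frame and column names below are invented for the sketch.

```python
import statsmodels.formula.api as smf

# matches: assumed pandas DataFrame with one row per team-match (after the
# exclusions described in the text), containing
#   changed      : 1 if the formation differed from the previous match, else 0
#   home         : 1 for a home match, 0 for an away match
#   prev_result  : 'win', 'draw', or 'loss' in the previous match
#   prev2_result : result of the second last match
#   strength_diff: focal team's strength minus the opponent's strength
#   manager      : manager identifier (manager-team combination)
formula = (
    "changed ~ home"
    " + C(prev_result, Treatment(reference='draw'))"
    " + C(prev2_result, Treatment(reference='draw'))"
    " + strength_diff"
    " + C(manager)"   # crude stand-in for the random intercept used in the paper
)
fit = smf.logit(formula, data=matches).fit()
print(fit.summary())
```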
Because the J-League data set did not have the information on managers between 1993 and 1998, we only used data between 1999 and 2014 in the GLMM analysis. We performed the statistical analysis using R 3.1.2 [30] with lme4 package [31]. Ordered probit model We also investigated the effects of formation changes on match results. We used the ordered probit model because a match result was ternary. Because the strength was considered to heavily depend on teams, we controlled for the strength of teams. The same model was used for fitting match results in football in the Netherlands [32] and the UK [18]. The dependent variable of the model was a match result. We assumed that the occurrence of formation change (change or no change), the stadium (home or away), the strength of teams, and the result of the previous match (win, draw, or loss) can affect a match result. As a linear combination of these factors, we defined the unobserved potential variable for team i, denoted by \(\alpha_{i}\), by $$ \alpha_{i} = \beta_{\mathrm{f}}f_{i} + \beta_{\mathrm{h}}h_{i}+\beta_{\mathrm{w}}w_{i} + \beta_{\ell}\ell_{i} +\beta_{\mathrm{ r}}r_{i}, $$ where \(f_{i}=1\) if team i changed the formation, and \(f_{i}=0\) otherwise; \(h_{i}=1\) if the stadium was the home of team i, and \(h_{i}=0\) otherwise; \(w_{i} = 1\) if team i won the previous match, and \(w_{i}=0\) otherwise; \(\ell_{i} = 1\) if team i lost the previous match, and \(\ell_{i}=0\) otherwise; the strength of team i denoted by \(r_{i}\) was defined as the fraction of matches that team i won in the given season. In Appendix D, we conducted the analysis by assuming that \(r_{i}\) was a latent variable obeying the normal distribution and then using the hierarchical Bayesian model [33]. Consider a match between home team i and away team j. We assumed that the match result, denoted by \(k_{ij}\), was determined by the difference between the potential values of the two teams, i.e., $$ y_{ij} \equiv\alpha_{i} - \alpha_{j}. $$ Variables \(y_{ij}\) and \(k_{ij}\) are related by $$ k_{ij} = \textstyle\begin{cases} 2 \mbox{ (home team wins)} & \mbox{if } c_{1}< y_{ij} + \epsilon _{ij}, \\ 1 \mbox{ (draw)} & \mbox{if } c_{0}< y_{ij} + \epsilon_{ij} < c_{1}, \\ 0 \mbox{ (home team loses)} & \mbox{if } y_{ij} + \epsilon_{ij}< c_{0}, \end{cases} $$ where \(c_{0}\) and \(c_{1}\) are threshold parameters, and \(\epsilon_{ij}\) is an error term that obeys the normal distribution with mean 0 and standard deviation 1. Because \(h_{i}-h_{j}=1\), \(\beta_{\mathrm{h}}\) appears as a constant term on the right-hand side of Eq. (2). In fact, it is impossible to estimate \(\beta_{\mathrm{h}}\) because \(\beta _{\mathrm{h}}\) effectively shifts \(c_{0}\) and \(c_{1}\) by the same amount such that there are only two degrees of freedom in the parameter space spanned by \(c_{0}\), \(c_{1}\), and \(\beta_{\mathrm{h}}\). Therefore, we assumed \(c_{0}=-c_{1}\) and estimated \(c_{0}\) and \(\beta _{\mathrm{h}}\). This assumption did not alter the estimates of the other parameters. Equation (3) results in $$\begin{aligned}& P(k_{ij}=2) = 1-\Phi(c_{1}-y_{ij}), \end{aligned}$$ $$\begin{aligned}& P(k_{ij}=1) = \Phi(c_{1}-y_{ij}) - \Phi(c_{0}-y_{ij}), \end{aligned}$$ $$\begin{aligned}& P(k_{ij}=0) = \Phi(c_{0}-y_{ij}), \end{aligned}$$ where P denotes the probability, and \(\Phi(\cdot)\) is the cumulative standard normal distribution function. We excluded the matches that were the first game in a season at least for either team. 
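The estimation itself was carried out in R (with the maxLik package, as noted below). As an illustration only, the ordered probit log-likelihood implied by the probabilities above could be written in Python as follows and then maximized with a generic optimizer such as scipy.optimize.minimize; the variable names are placeholders.

```python
import numpy as np
from scipy.stats import norm

def negative_log_likelihood(params, f, h, w, l, r, k):
    """Ordered probit likelihood for home (column 0) vs. away (column 1) teams.

    params: (beta_f, beta_h, beta_w, beta_l, beta_r, c1), with c0 = -c1.
    f, h, w, l, r: arrays of shape (n_matches, 2) holding the indicators and
                   team strengths entering Eq. (1) for the two teams.
    k: match results; 2 = home win, 1 = draw, 0 = home loss.
    """
    beta_f, beta_h, beta_w, beta_l, beta_r, c1 = params
    c0 = -c1
    alpha = beta_f * f + beta_h * h + beta_w * w + beta_l * l + beta_r * r
    y = alpha[:, 0] - alpha[:, 1]                 # y_ij = alpha_i - alpha_j
    p_home_win = 1.0 - norm.cdf(c1 - y)
    p_draw = norm.cdf(c1 - y) - norm.cdf(c0 - y)
    p_home_loss = norm.cdf(c0 - y)
    probs = np.choose(k, [p_home_loss, p_draw, p_home_win])
    return -np.sum(np.log(probs))
```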
We also excluded matches immediately after a change of manager in either team. Because the J-League data set did not contain the information on managers between 1993 and 1998, we only used data between 1999 and 2014 in this analysis. We performed the analysis using R 3.1.2 [30] and maxLik package [34]. Influence of individual manager's behavior on match results Different managers may show WSLS behavior to different extents to respectively affect match results. Therefore, we analyzed data separately for individual managers. For each manager i, we calculated the probability of winning under each of the following four conditions: (i) i's team won the previous match, and i changed the formation, (ii) i's team won the previous match, and i did not change the formation, (iii) i's team lost the previous match, and i changed the formation, and (iv) i's team lost the previous match, and i did not change the formation. We then compared the probability of winning between cases (i) and (ii), and between cases (iii) and (iv) using the paired t-test. In the t-test, we included the managers who directed at least ten pairs of consecutive matches in both of the two cases in comparison. In this and the next sections, we treated a manager as different data points when he directed different teams, as explained in Section 2.1. In addition, we excluded the pairs of consecutive matches when the managers changed the team between the two matches. Degree of win-stay lose-shift To further examine possible relationships between manager's behavior and match results, we looked at the relationships between the tendency of the WSLS behavior for each manager (degree of WSLS for short) and the probability of winning. The degree of WSLS is defined by $$\begin{aligned}& \mbox{degree of WSLS} \\& \quad= \bigl|P(\mbox{change}|\mbox{win}) - P_{\mathrm{WSLS}}(\mbox{change}|\mbox{win}) \bigr|+ \bigl|P(\mbox{change}|\mbox{loss})- P_{\mathrm{WSLS}}(\mbox{change}|\mbox{loss}) \bigr| \\& \quad= \bigl|P(\mbox{change}|\mbox{win}) - 0 \bigr|+ \bigl|P(\mbox{change}|\mbox{loss}) - 1 \bigr| \\& \quad= P(\mbox{change}|\mbox{win})+1-P(\mbox{change}|\mbox{loss}), \end{aligned}$$ where \(P_{\mathrm{WSLS}}(\mbox{change}|\mbox{win})\) (=0) is the conditional probability that a perfect WSLS manager changes the formation after winning, and likewise for \(P_{\mathrm{WSLS}}(\mbox{change}|\mbox{loss})\) (=1). The degree of WSLS ranges from 0 to 2. Win-stay lose-shift behavior in formation changes We examined the extent to which managers possibly changed the formation of the team after losing a match and persist to the current formation after a win. The results of the GLMM analysis with the results of the previous matches being the only independent variables are shown in Table 2. For both data sets, winning in a match significantly decreased the probability of formation change in the next match, and losing in a match increased the probability of formation change. The results did not essentially change when we used the full set of independent variables (Table 3). Formation changes are consistent with WSLS patterns. Table 2 Results of the GLMM analysis when the results of the previous match were used as the sole independent variables Table 3 Results of the GLMM analysis when all the independent variables were considered For the J-League data, the effects of all the additional independent variables were insignificant. 
We analyzed the J-League data by regarding a pair of half seasons (i.e., a yearly season) as a season to confirm that the results remained qualitatively the same except that winning in the second last match also significantly decreased the probability of formation change (Appendix A). We also confirmed that matches played further in the past affected the probability of formation change to progressively smaller extents (Appendix E). For the Bundesliga data, winning in the second last match also significantly decreased the probability of formation change in the extended GLMM model (Table 3). These results are consistent with WSLS behavior. We also found for the Bundesliga data that stronger teams less frequently changed the formation and that teams tended not to change the formation for home matches. We also investigated the Fussballdaten data for Bundesliga, in which the definition of formation was different, and confirmed that managers tended to use the WSLS strategy (Appendix B). Determinants of match results The results obtained from the ordered probit model are shown in Table 4. For both data sets, formation changes did not significantly affect a match result. The result remained qualitatively the same when each pair of half seasons was considered as a season in the J-League data (Appendix A), and when the strength of a team was assumed to be a latent variable in the ordered probit model (Appendix D). However, when the Fussballdaten data were used, formation changes significantly decreased the probability of winning (Appendix B). Table 4 also tells us the following. Trivially, stronger teams were more likely to win in both data sets. The home advantage was significant in both data sets, consistent with previous literature [18, 35]. In Bundesliga, a win tended to yield a poor result in the next match. This is consistent with negative persistence effects reported in previous literature [18], i.e., the results of the current and previous matches tend to be the opposite. Table 4 Effects of variables on match results as obtained from the ordered probit model Figure 4(a) shows the probability of winning after individual managers changed or did not change the formation after a win in the J-League data. A large circle in Figure 4 represents a manager who presented both types of actions (i.e., formation change after winning and no formation change after winning) at least ten times. A small circle represents a manager who presented either type of action less than ten times. The formation change does not appear to affect the probability of winning. This is also apparently the case for the actions after a loss (Figure 4(b)) and the Bundesliga data (Figures 4(c) and 4(d)). The results also appear to be insensitive to the unconditional probability of winning, which roughly corresponds to the position along the diagonal in Figure 4. To be quantitative, we conducted the paired t-test on the managers who showed the two types of actions at least ten times in each case (managers shown by the large circles in Figure 4). For the J-League data, there was no significant effect of formation change on the probability of winning either after winning (\(p = 0.441\), \(n=3\); corresponding to Figure 4(a)) or after losing (\(p = 0.404\), \(n=4\); Figure 4(b)). For the Bundesliga data, formation changes after winning significantly decreased the probability of winning in the next match (\(p = 0.026\), \(n=46\); Figure 4(c)), whereas there was no significant effect after losing (\(p = 0.533\), \(n=42\); Figure 4(d)).
These results suggest that formation changes did not at least increase the possibility of winning. Conditional probability of winning for individual managers after they changed or did not change the formation. A circle represents a manager associated with a team, who directed the team in at least 100 matches. A large circle represents a manager who showed both types of the actions (e.g., formation change after a win and no formation change after a win in (a) and (c)) at least ten times. A small circle represents a manager who presented either type of action less than ten times. (a) When the team won the previous match in the J-League data. (b) When the team lost the previous match in the J-League data. (c) When the team won the previous match in the Bundesliga data. (d) When the team lost the previous match in the Bundesliga data. The analysis with the ordered probit model aggregated the data from all managers. Therefore, we examined the relationship between the degree of WSLS and the probability of winning for individual managers. The results are shown in Figure 5. A circle in Figure 5 represents a manager. We did not find a significant relationship between the usage of the WSLS and the probability of winning for both J-League (Pearson's \(r = 0.213\), \(p = 0.411\), \(n=17\)) and Bundesliga (\(r = 0.058\), \(p = 0.668\), \(n=58\)) data. Relationship between the degree of WSLS and the probability of winning. (a) J-League. (b) Bundesliga. A circle represents a manager associated with a team, who directed the team in at least 100 matches. We have provided evidence that football managers tend to stick to the current formation until the team loses, consistent with the WSLS strategy previously shown in laboratory experiments with social dilemma games [9, 10] and gambling tasks [11, 12]. Formation changes did not significantly affect (at least did not improve) a match result in most cases. This result seems to be odd because managers change formation to lead the team to a success. Generally speaking, when the environment in which an agent is located is fixed or exogenously changing, reinforcement learning usually improves the performance of the agent [4]. However, computational studies have suggested that it is not always the case when agents employing reinforcement learning are competing with each other, because the competing agents try to supersede each other [36–39]. The present finding that manager's WSLS behavior does not improve team's performance is consistent with these computational results. Empirical studies also suggest that humans obeying reinforcement learning does not improve the performance in complex environments. For example, players in the National Basketball Association were more likely to attempt 3 point shots after successful 3 point shots although their probability of success decreased for additional shots [40]. Also in nonscientific accounts, it has been suggested that humans engaged in sports and gambles often use the WSLS strategy even if outcome of games is determined merely at random [16]. We have provided quantitative evidence underlying these statements. Many sports fans possess the hot hand belief in match results, i.e., belief that a win or good performance persists [41]. However, empirical evidence supports that streaks of wins and those of losses are less likely to occur than under the independence assumption [41]. 
By analyzing patterns of matches in the top division of football in England, Dobson and Goddard suggested the existence of negative persistence effects, i.e., a team with consecutive wins tended to perform poorly in the next match and vice versa [18]. Their results are consistent with the present results; we observed the negative persistence effects, i.e., anticorrelation between the results of the previous and present matches. In the present study, we have neglected various factors that potentially affect the likelihood of formation change because our data sets did not contain the relevant information. For example, managers may change formations due to injuries, suspensions of players, and other strategic reasons including transfer of players. More detailed data will be able to provide further understanding of the relative importance of strategic versus accidental factors in formation changes. An important limitation of the present study is that we have oversimplified the concept of formation. Effective formations dynamically change during a match owing to movements of players. Because of the availability of data and our interests in the manager's long-term behavior rather than formation changes during a match [25], we used the formation data released in the beginning of the matches. Based on recent technological developments, formations can be extracted from tracking data on movement patterns of players [42, 43]. Investigations on manager's decision making using such technologies warrant further research. Fudenberg D, Levine DK (1998) The theory of learning in games. MIT Press, Cambridge Camerer C (2003) Behavioral game theory: experiments in strategic interaction. Princeton University Press, Princeton Pearce JM (2013) Animal learning and cognition: an introduction. Psychology Press, Hove Sutton RS, Barto AG (1998) Reinforcement learning: an introduction. MIT Press, Cambridge Schultz W, Dayan P, Montague PR (1997) A neural substrate of prediction and reward. Science 275(5306):1593-1599 Glimcher PW, Camerer C, Fehr E, Poldrack RA (2009) Neuroeconomics: decision making and the brain. Academic Press, New York Kraines D, Kraines V (1989) Pavlov and the prisoner's dilemma. Theory Decis 26(1):47-79 Nowak M, Sigmund K (1993) A strategy of win-stay, lose-shift that outperforms tit-for-tat in the Prisoner's Dilemma Game. Nature 364(6432):56-58 Wedekind C, Milinski M (1996) Human cooperation in the simultaneous and the alternating Prisoner's Dilemma: Pavlov versus Generous Tit-for-Tat. Proc Natl Acad Sci USA 93(7):2686-2689 Milinski M, Wedekind C (1998) Working memory constrains human cooperation in the Prisoner's Dilemma. Proc Natl Acad Sci 95(23):13755-13758 Hayden BY, Platt ML (2009) Gambling for Gatorade: risk-sensitive decision making for fluid rewards in humans. Anim Cogn 12(1):201-207 Scheibehenne B, Wilke A, Todd PM (2011) Expectations of clumpy resources influence predictions of sequential events. Evol Hum Behav 32(5):326-333 Mesoudi A, O'Brien MJ (2008) The cultural transmission of Great Basin projectile-point technology I: an experimental simulation. Am Antiq 73(1):3-28 Mesoudi A, O'Brien MJ (2008) The cultural transmission of Great Basin projectile-point technology II: an agent-based computer simulation. Am Antiq 73(4):627-644 Mesoudi A (2014) Experimental studies of modern human social and individual learning in an archaeological context: people behave adaptively, but within limits. 
In: Akazawa T, Ogihara N, Tanabe HC, Terashima H (eds) Dynamics of learning in Neanderthals and modern humans, vol 2. Springer, Heidelberg, pp 65-76 Vyse SA (2013) Believing in magic: the psychology of superstition, updated edn. Oxford University Press, Oxford Panja T (2013) Top soccer leagues get 25% rise in TV rights sales, report says. In: Bloomberg Business. http://www.bloomberg.com/news/articles/2013-11-11/top-soccer-leagues-get-25-rise-in-tv-rights-sales-report-says. Accessed 28 Feb 2015 Dobson S, Goddard J (2001) The economics of football. Cambridge University Press, Cambridge Dixon MJ, Coles SG (1997) Modelling association football scores and inefficiencies in the football betting market. J R Stat Soc, Ser C, Appl Stat 46(2):265-280 Wilson J (2013) Inverting the pyramid: the history of soccer tactics. Nation Books, New York Audas R, Dobson S, Goddard J (1997) Team performance and managerial change in the English Football League. Econ Aff 17(6):30-36 Dawson P, Dobson S, Gerrard B (2000) Estimating coaching efficiency in professional team sports: evidence from English association football. Scott J Polit Econ 47(4):399-421 Audas R, Dobson S, Goddard J (2002) The impact of managerial change on team performance in professional sports. J Econ Bus 54(6):633-650 Bangsbo J, Peitersen B (2000) Soccer systems & strategies. Human Kinetics, Champaign Hirotsu N, Wright MB (2006) Modeling tactical changes of formation in association football as a zero-sum game. J Quant Anal Sports 2(2):4 MathSciNet Google Scholar J-League Data site https://data.j-league.or.jp/SFTP01/. Accessed 4-6 June 2014 Kicker-online http://www.kicker.de/. Accessed 14-16 Aug 2014 Fussballdaten http://www.fussballdaten.de/. Accessed 28-31 July 2014 Goh KI, Barabási AL (2008) Burstiness and memory in complex systems. EPL 81:48002 R Core Team (2014) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna. http://www.R-project.org/ Bates D, Maechler M, Bolker B, Walker S (2014) lme4: linear mixed-effects models using Eigen and S4. http://CRAN.R-project.org/package=lme4. R package version 1.1-7 Koning RH (2000) Balance in competition in Dutch soccer. J R Stat Soc, Ser D, Statist 49:419-431 Parent E, Rivot E (2012) Introduction to hierarchical Bayesian modeling for ecological data. CRC Press, Boca Raton Henningsen A, Toomet O (2011) maxLik: a package for maximum likelihood estimation in R. Comput Stat 26(3):443-458 Albert J, Koning RH (2010) Statistical thinking in sports. CRC Press, Boca Raton Taiji M, Ikegami T (1999) Dynamics of internal models in game players. Physica D 134(2):253-266 Macy MW, Flache A (2002) Learning dynamics in social dilemmas. Proc Natl Acad Sci USA 99(Suppl 3):7229-7236 Masuda N, Ohtsuki H (2009) A theoretical analysis of temporal difference learning in the iterated prisoner's dilemma game. Bull Math Biol 71(8):1818-1850 Masuda N, Nakamura M (2011) Numerical analysis of a reinforcement learning model with the dynamic aspiration level in the iterated prisoner's dilemma. J Theor Biol 278(1):55-62 Neiman T, Loewenstein Y (2011) Reinforcement learning in professional basketball players. Nat Commun 2:569 Bar-Eli M, Avugos S, Raab M (2006) Twenty years of "hot hand" research: review and critique. Psychol Sport Exerc 7(6):525-553 Bialkowski A, Lucey P, Carr P, Yue Y, Matthews I (2014) "Win at home and draw away": automatic formation analysis highlighting the differences in home and away team behaviors. 
In: MIT Sloan sports analytics conference (SSAC) Bialkowski A, Lucey P, Carr P, Yue Y, Sridharanand S et al. (2014) Large-scale analysis of soccer matches using spatiotemporal data. In: IEEE international conference on data mining (ICDM) Stan Development Team (2014) Rstan: the R interface to Stan, version 2.5.0. http://mc-stan.org/rstan.html This work is supported by JST, CREST. Department of Creative Informatics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan Kohei Tamura CREST, JST, 4-1-8, Honcho, Kawaguchi-shi, Saitama, 332-0012, Japan Department of Engineering Mathematics, University of Bristol, Woodland Road, Clifton, Bristol, BS8 1UB, UK Naoki Masuda Correspondence to Naoki Masuda. All authors designed the study. KT collected and analyzed the data. All authors wrote the manuscript. Appendix A: Analysis of the J-League data on the basis of yearly seasons From 1993 to 2004, except for 1996, each season of J-League, spanning a year, was subdivided into two half seasons. In the main text, we regarded each half season as a season. To examine the robustness of our results, we carried out the same analysis when we regarded one entire season (i.e., one year), not one half season, as a season. The results were qualitatively the same as those shown in the main text (Tables 5–7) except that winning in the second last match significantly decreased the probability of formation change. Table 5 Results of the GLMM analysis for the J-League data when a year was regarded as a season Table 6 Results of the GLMM analysis with all independent variables for the J-League data when a year was regarded as a season Table 7 Effects of variables on match results for the J-League data when a year was regarded as a season Appendix B: Fussballdaten We analyzed data on Bundesliga from another website, Fussballdaten [28]. In the Fussballdaten data, each field player was assigned to one of the three positions (i.e., DF, MF, or FW) registered for an entire season. We defined the formation by counting the number of each type of field player in the same manner as that for the J-League data set. First, to examine possible existence of WSLS behavior by managers, we applied the GLMM analysis to the Fussballdaten data. The results shown in Tables 8 and 9 are largely consistent with those for the Kicker-online data (Tables 1 and 2). In particular, winning and losing in the previous match significantly decreased and increased the probability of formation change in the next match, respectively, consistent with WSLS behavior. Table 8 Results of the GLMM analysis for the Fussballdaten data when the results of the previous match were used as the sole independent variables Table 9 Results of the GLMM analysis for the Fussballdaten data when all the independent variables were considered Second, we also investigated the effect of formation change and other factors on the match result using the ordered probit model. Table 10 indicates that formation changes have decreased the probability of winning. This result is not consistent with those for the two data sets shown in the main text. In addition, winning in the previous match decreased the probability of winning in the next match, indicating the presence of the negative persistence effect. This result is consistent with that for the Kicker-online data (Table 4). 
Table 10 Effects of variables on match results for the Fussballdaten data

Appendix C: Burstiness and memory coefficient of interevent time series

To capture temporal properties of formation changes, in this section we calculated the burstiness, B, and the memory coefficient, M, [29] on the basis of the interevent time series \(\{ \tau_{i} \}\) defined as follows. We calculated B and M for each manager. As in the main text, we treated a manager as different data points when he directed different teams. In the main text, we used interevent time series for individual seasons without concatenating different seasons. In this section, however, we use \(\{\tau^{k}_{i}\}\) obtained by concatenating all seasons. For a given manager, we denote by \(t_{0}, t_{1}, \ldots, t_{N}\) (\(2\le t_{0} < t_{1} < \cdots < t_{N}\)) the times when the manager changes the formation. The number of formation changes summed over all seasons is equal to \(N+1\). We counted time in terms of the number of matches rather than days, to exclude the effect of variable real-time intervals between consecutive matches. If \(t_{2} = 5\), for example, the manager changed the formation to play the fifth match, and it was the third change for the manager since the first match in the data set. Managers sometimes moved from one team to another or did not direct any team. Because formation changes occurring as a result of a manager's move or after a long absence were not considered to be strategic, we discarded the corresponding intervals. It should be noted that we did not mix interevent time series for a manager leading different teams. Then, the time series \(\{t_{i}\}\) for each manager was partitioned into K segments by the manager's moves or absences. We denoted by \(N_{k}+1\) and K the total number of formation changes in the kth segment and the total number of segments, respectively. It holds true that \(\sum_{k=1}^{K} (N_{k} + 1) = N+1\). We also denoted by \(t^{k}_{i}\) (\(0\le i\le N_{k}\)) the time of the ith formation change in the kth segment. The interevent time for formation changes was defined by \(\tau^{k}_{i} = t^{k}_{i}-t^{k}_{i-1}\) (\(1\le i\le N_{k}\)), so that the total number of interevent times is \(\sum_{k=1}^{K} N_{k}=N+1-K\). In the following analysis, we used managers who directed at least 100 matches in a team and had \(N+1-K \ge 10\). The burstiness is defined by
$$ B=\frac{\sigma/m-1}{\sigma/m+1}=\frac{\sigma-m}{\sigma+m}, $$
where \(m=\sum_{k=1}^{K} \sum_{i=1}^{N_{k}} \tau^{k}_{i}/(N+1-K)\) and \(\sigma=\sqrt{ \sum_{k=1}^{K} \sum_{i=1}^{N_{k}} (\tau^{k}_{i} - m)^{2} /(N+1-K)}\) are the mean and standard deviation of the interevent time, respectively. B ranges between −1 and 1. A large value of B indicates that a sequence of formation change events is bursty in the sense that the interevent time obeys a long-tailed distribution. The Poisson process yields the exponential distribution and hence yields \(B = 0\).
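As a concrete illustration, B can be computed directly from the segment-wise interevent times described above. The R sketch below is ours, not the authors' code; the list `segments` (one vector of formation-change times per uninterrupted manager-team spell) and the toy numbers at the end are assumptions made purely for illustration.

```r
# Illustrative sketch (not the authors' code). `segments` is assumed to be a
# list with one integer vector per uninterrupted manager-team spell, holding
# the match indices t^k_0, ..., t^k_{N_k} at which the formation was changed.
burstiness <- function(segments) {
  # interevent times are taken within segments only, so that intervals
  # spanning a manager's move or absence are never included
  tau <- unlist(lapply(segments, function(t) diff(sort(t))))
  m <- mean(tau)                   # mean interevent time
  s <- sqrt(mean((tau - m)^2))     # population standard deviation
  (s - m) / (s + m)                # B = (sigma - m) / (sigma + m)
}

# Toy example: one spell with changes at matches 2, 3, 9, 10 and 24,
# and a second spell (after a move) with changes at matches 3, 4 and 18.
burstiness(list(c(2, 3, 9, 10, 24), c(3, 4, 18)))
```

A value close to 1 signals heavy-tailed (bursty) interevent times, a value near 0 is consistent with a Poisson process, and negative values indicate more regular changes than Poisson.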
The memory coefficient quantifies the correlation between two consecutive interevent times and is defined by
$$ M=\frac{1}{N-K} \sum_{k=1}^{K} \sum_{i=1}^{N_{k}-1}\frac{(\tau_{i}^{k}-m_{1})(\tau_{i+1}^{k}-m_{2})}{\sigma_{1}\sigma_{2}}, $$
where
$$ m_{1}= \sum_{k=1}^{K} \sum_{i=1}^{N_{k}-1} \frac{\tau^{k}_{i}}{N-K}, \qquad m_{2}= \sum_{k=1}^{K} \sum_{i=2}^{N_{k}}\frac{\tau^{k}_{i}}{N-K}, $$
$$ \sigma_{1}=\sqrt{\sum_{k=1}^{K} \sum_{i=1}^{N_{k}-1} \frac{(\tau^{k}_{i} - m_{1})^{2}}{N-K}}, \qquad \sigma_{2}=\sqrt{\sum_{k=1}^{K} \sum_{i=2}^{N_{k}} \frac{(\tau^{k}_{i} - m_{2})^{2}}{N-K}}. $$
An uncorrelated sequence of interevent times yields \(M=0\). To examine the statistical significance of the B value for each manager, we generated \(10^{3}\) sequences of interevent times from the exponential distribution whose mean was equal to that of the original data. Each synthesized sequence had the same length (i.e., N) as that of the original data. We calculated B for each synthesized sequence. We regarded the value of B for the original data as significant if it was not included in the 95% confidence interval (CI) obtained from the distribution generated by the \(10^{3}\) sequences corresponding to the Poisson process. We calculated the CI for M in the same manner, except that we generated synthesized sequences by randomizing the original sequence of interevent times instead of sampling sequences from the exponential distribution. Figure 6 shows histograms of the burstiness, B, and the memory coefficient, M, for the managers. The average values of B and M for interevent times of formation changes in the J-League data were equal to 0.145 and −0.137, respectively. Those for the Bundesliga data set were equal to 0.022 and −0.120, respectively. For both data sets, the average values of B were positive, and those of M were negative. The fractions of managers yielding significantly positive and negative B values were equal to 0.385 and 0, respectively, for the J-League data. Those for the Bundesliga data were equal to 0.372 and 0.244, respectively. These results suggest that in both data sets a moderate fraction of managers changed formations in a bursty manner. In the Bundesliga data, however, some managers changed formations more regularly than expected from the Poisson process. The fractions of significantly positive and negative M values were equal to 0.077 and 0.077, respectively, for the J-League data. Those for the Bundesliga data were equal to 0.026 and 0.026, respectively. In both data sets, the fractions of managers with significant M values were small, indicating that two consecutive interevent times were uncorrelated for a majority of managers.

Figure 6: Distributions of burstiness, B, and memory, M, across managers. (a) B for the J-League data. (b) M for the J-League data. (c) B for the Bundesliga data. (d) M for the Bundesliga data. The colored bars correspond to managers who have statistically significant values.

Appendix D: Hierarchical Bayesian model

In the main text, we used the fraction of matches that a team won in a season to define the strength of the team. In this section, we analyze a model in which the strength of a team is assumed to be a latent variable. We used the hierarchical Bayesian ordered probit model combined with the Markov chain Monte Carlo (MCMC) method [33]. The model is the same as that used in the main text except for the derivation of the team strength.
We assumed that the prior of the strength of team i in a season, denoted by \(r_{i}\), obeyed the normal distribution with mean 0 and variance \(\sigma^{2}\). The priors of \(\beta_{\mathrm{f}}\), \(\beta_{\mathrm{h}}\), \(\beta_{\mathrm{w}}\), and \(\beta_{\ell}\) obeyed the normal distribution with mean 0 and variance \(10^{2}\). The prior of \(\sigma^{2}\) obeyed the uniform distribution on \([0, 10^{4}]\). We conducted MCMC simulations for four independent chains starting from the same prior distributions. The total number of iterations per chain was set to 25,000, and the first 5,000 iterations were discarded as transient. The thinning interval was set to 20 iterations. A final coefficient was regarded as significant if the 95% credible interval did not bracket zero. We excluded the matches that were the first game in a season for at least one of the two teams. We performed the analysis using R 3.1.2 [30] and the RStan package [44]. Table 11 summarizes the results obtained from the Bayesian probit model. For both data sets, the credible interval of the coefficient representing the effect of the formation change brackets zero. Therefore, we conclude that formation changes have not affected the probability of winning.

Table 11 Effects of variables on match results obtained from the hierarchical Bayesian ordered probit model

Appendix E: Cross-correlation analysis

To further investigate possible relationships between formation changes and match results, we measured the cross-correlation between the two. In this analysis, we did not exclude the first match in each season. We used the teams that played at least 100 matches. We set \(f_{i,t}=1\) if team i changes the formation in the tth match (\(2\le t\le T_{i}\)), where \(T_{i}\) is the number of matches played by team i, and \(f_{i,t}=0\) otherwise; \(w_{i,t} = 1\) if team i wins the tth match, and \(w_{i,t}=0\) otherwise; \(\ell_{i,t} = 1\) if team i loses the tth match, and \(\ell_{i,t}=0\) otherwise. We defined the cross-correlation between two time series \(\{x_{i,t}\}\) and \(\{y_{i,t}\}\) by
$$ \rho(x,y,\tilde{\tau}) = \frac{\sum_{i=1}^{N_{\mathrm{team}}}\sum_{t=2}^{T_{i}-\tilde{\tau}} (x_{i,t+\tilde{\tau}}-\bar{x})(y_{i,t}-\bar{y})}{\sqrt{\sum_{i=1}^{N_{\mathrm{team}}}\sum_{t=2}^{T_{i}-\tilde{\tau}} (x_{i,t+\tilde{\tau}}-\bar{x})^{2}}\sqrt{\sum_{i=1}^{N_{\mathrm{team}}}\sum_{t=2}^{T_{i}-\tilde{\tau}} (y_{i,t}-\bar{y})^{2}}}, $$
where \(\bar{x}=(1/N_{\mathrm{team}})\times\sum_{i=1}^{N_{\mathrm{team}}}\sum_{t=2}^{T_{i}-\tilde{\tau}}x_{i,t+\tilde{\tau}}/(T_{i}-\tilde{\tau}-1)\), \(\bar{y}=(1/N_{\mathrm{team}})\times\sum_{i=1}^{N_{\mathrm{team}}}\sum_{t=2}^{T_{i}-\tilde{\tau}}y_{i,t}/(T_{i}-\tilde{\tau}-1)\), \(N_{\mathrm{team}}\) is the number of teams, and \(\tilde{\tau}\) is the lag. We measured the cross-correlation between formation changes and wins by
$$ \rho(f,w,\tilde{\tau}) \quad \mbox{if } \tilde{\tau} \geq 0, \qquad (15) $$
$$ \rho(w,f,-\tilde{\tau}) \quad \mbox{if } \tilde{\tau} < 0. \qquad (16) $$
Replacing w by ℓ in Eqs. (15) and (16) defines the cross-correlation between formation changes and losses. To examine the statistical significance of the cross-correlation obtained from the original data, we generated \(10^{3}\) randomized sequences of formation changes as follows.
For a given team i and positive lag \(\tilde{\tau}\), we randomly shuffled the original sequence of formation changes, \(\{f_{i,2+\tilde{\tau}}, \ldots, f_{i,T_{i}} \}\), by assigning 1 (i.e., formation change) to each match with equal probability, such that the number of 1s in the synthesized sequence was equal to that in the original sequence. We generated a randomized sequence for each team. Then, we measured the cross-correlation between the randomized sequences of formation changes and \(\{ w_{i,2}, \ldots, w_{i,T_{i}-\tilde{\tau}}\}\) or \(\{ \ell_{i,2}, \ldots, \ell_{i,T_{i}-\tilde{\tau}}\}\) using Eq. (15). We repeated this procedure \(10^{3}\) times to obtain \(10^{3}\) cross-correlation values. The cross-correlation for the original data was considered significant for a given \(\tilde{\tau}\) if it was not included within the 95% CI calculated on the basis of the \(10^{3}\) correlation coefficient values for the randomized samples. We also examined the statistical significance of the cross-correlation for a negative lag on the basis of \(10^{3}\) cross-correlation values between randomized sequences of \(\{ f_{i,2}, \ldots, f_{i,T_{i}-\tilde{\tau}}\}\) and \(\{w_{i,2+\tilde{\tau}}, \ldots, w_{i,T_{i}} \}\) or \(\{ \ell_{i,2+\tilde{\tau}}, \ldots, \ell_{i,T_{i}} \}\) using Eq. (16). The cross-correlation measured for various lags is shown in Figure 7. The cross-correlation value was largest in absolute value at \(\tilde{\tau} = 1\). The effect of past matches on formation changes was mildly significant between \(\tilde{\tau} = 2\) and \(\tilde{\tau} \approx 5\). The sign of the effect (i.e., positive or negative) was the same for different lag values, which is consistent with the concept of WSLS. When \(\tilde{\tau} \leq 0\), the cross-correlation was insignificant or only weakly significant even for \(\tilde{\tau} \approx 0\). This result is suggestive of a causal relationship, i.e., a match result tends to cause a formation change.

Figure 7: Cross-correlation between temporal patterns of formation changes and match results. Ranges between the dashed lines represent 95% CIs on the basis of the randomized sequences of formation changes. Cross-correlation between (a) formation changes and wins for the J-League data, (b) formation changes and losses for the J-League data, (c) formation changes and wins for the Bundesliga data, and (d) formation changes and losses for the Bundesliga data.

Tamura, K., Masuda, N. Win-stay lose-shift strategy in formation changes in football. EPJ Data Sci. 4, 9 (2015). https://doi.org/10.1140/epjds/s13688-015-0045-1
CommonCrawl
Fields in Algebra

A commutative ring with unity is called a field if each of its non-zero elements possesses a multiplicative inverse. Thus a ring $$R$$ in which the elements of $$R$$ different from $$0$$ form an abelian group under multiplication is a field. Hence, a set $$F$$ having at least two distinct elements, together with two operations $$ + $$ and $$ \times $$, is said to form a field if the following axioms are satisfied:

(F1): $$F$$ is closed under addition, i.e. $$\forall a,b \in F \Rightarrow a + b \in F$$.
(F2): The Associative Law holds for addition in $$F$$, i.e. for all $$a,b,c \in F$$, $$\left( {a + b} \right) + c = a + \left( {b + c} \right)$$.
(F3): An identity element with respect to addition exists in $$F$$, i.e. there exists $$0 \in F$$ such that $$a + 0 = 0 + a = a$$ for all $$a \in F$$.
(F4): Additive inverses exist in $$F$$, i.e. for all $$a \in F$$ there exists an element $$ - a \in F$$ such that $$a + \left( { - a} \right) = - a + a = 0$$.
(F5): The Commutative Law holds for addition in $$F$$, i.e. for all $$a,b \in F$$, $$a + b = b + a$$.
(F6): $$F$$ is closed under multiplication, i.e. $$\forall a,b \in F \Rightarrow a \cdot b \in F$$.
(F7): The Associative Law holds for multiplication in $$F$$, i.e. for all $$a,b,c \in F$$, $$\left( {a \cdot b} \right) \cdot c = a \cdot \left( {b \cdot c} \right)$$.
(F8): An identity element with respect to multiplication exists in $$F$$, i.e. there exists $$1 \in F$$ such that $$a \cdot 1 = 1 \cdot a = a$$ for all $$a \in F$$.
(F9): Multiplicative inverses exist for every non-zero element of $$F$$, i.e. for all $$a \in F$$ with $$a \ne 0$$, there exists an element $${a^{ - 1}} \in F$$ (the multiplicative inverse) such that $$a \cdot {a^{ - 1}} = {a^{ - 1}} \cdot a = 1$$.
(F10): The Commutative Law holds for multiplication in $$F$$, i.e. for all $$a,b \in F$$, $$a \cdot b = b \cdot a$$.
(F11): The Distributive Laws of multiplication over addition hold, i.e. for all $$a,b,c \in F$$, $$a \cdot \left( {b + c} \right) = a \cdot b + a \cdot c$$ and $$\left( {b + c} \right) \cdot a = b \cdot a + c \cdot a$$.

The above properties can be summarized as:
(1) $$\left( {F, + } \right)$$ is an abelian group.
(2) $$\left( {F, \times } \right)$$ is an abelian semigroup (with identity $$1$$), and $$\left( {F - \left\{ 0 \right\}, \times } \right)$$ is an abelian group.
(3) Multiplication is distributive over addition.
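As a quick illustration (an example added here for concreteness, not part of the original list of axioms): the set $$\mathbb{Z}_5 = \left\{ {0,1,2,3,4} \right\}$$ of integers modulo the prime $$5$$, with addition and multiplication taken mod $$5$$, satisfies all eleven axioms. In particular, every non-zero element has a multiplicative inverse, since $$1 \cdot 1 = 2 \cdot 3 = 4 \cdot 4 = 1\left( {\bmod 5} \right)$$. By contrast, $$\mathbb{Z}_6$$ is a commutative ring with unity but not a field, because $$2$$ has no multiplicative inverse: $$2 \cdot b$$ is always even modulo $$6$$ and so can never equal $$1$$, violating (F9). More generally, $$\mathbb{Z}_n$$ is a field if and only if $$n$$ is prime.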
CommonCrawl
The Dynamics of Opportunity in America, pp 97-136
The Changing Distribution of Educational Opportunities: 1993–2012
Bruce Baker, Danielle Farrie, David G. Sciarra
First Online: 10 March 2016

Over the past several decades, many states have pursued substantive changes to their state school finance systems. Some reforms have been stimulated by judicial pressure resulting from state constitutional challenges and others have been initiated by legislatures. But despite gains in school funding equity and adequacy made over the past few decades, in recent years we have witnessed a substantial retreat from equity and adequacy. This chapter builds on the national school funding fairness report annually published by the Education Law Center. We track school funding fairness (the relative targeting of funding to districts serving economically disadvantaged children) for all states from 1993 to 2012. This chapter explores in greater depth the consequences of school funding levels, distributions, and changes in specific classroom resources provided in schools. We find that states and districts applying more effort—spending a greater share of their fiscal capacity on schools—generally spend more on schools, and that these higher spending levels translate into higher staffing levels and lower class sizes as well as more competitive teacher wages.

Keywords: School funding, School finance, Funding equity, Funding fairness, Class size, Teacher compensation, School quality, Pay for performance, School poverty

Over the past several decades, many states have pursued substantive changes to their state school finance systems. Some reforms have been stimulated by judicial pressure resulting from state constitutional challenges and others have been initiated by legislatures. But despite gains in school funding equity and adequacy made over the past few decades, in recent years we have witnessed a substantial retreat from equity and adequacy, and retrenchment among state legislatures, governors, and federal officials across the political aisle, with many contending that the level and distribution of school funding are not primary factors in quality of education. This chapter builds on the national school funding fairness report annually published by the Education Law Center, in which we apply regression-based methods to national data on all local public school districts to characterize state school finance systems (Baker et al. 2014). Specifically, we evaluate whether those systems lead to consistent targeting of resources to districts serving higher concentrations of children from economically disadvantaged backgrounds. In this chapter we expand our analysis in two directions. First, our past three national reports have each been based on the most recent three available years of district level data on state and local revenues. In this chapter, we track school funding fairness (the relative targeting of funding to districts serving economically disadvantaged children) for all states from 1993 to 2012. This time period includes substantive changes to state school finance systems in several states, whether as a function of ongoing litigation or proactive legislative change. Further, this period runs through the recent economic downturn, in which several state school finance systems lost significant ground, both in level of overall funding and in fairness of distribution (Baker 2014).
Thus we are able to evaluate the extent of backsliding and the partial rebound that has occurred. Second, this chapter explores in greater depth the consequences of school funding levels, distributions, and changes in specific classroom resources provided in schools. The majority of school spending is dedicated to staffing, with the primary spending tradeoff being the balance between employee salaries and the numbers of employees assigned. Competitive teacher wages and appropriate class sizes are important to the provision of equitable and adequate educational programs and services. The third edition of Is School Funding Fair included additional indicators related to (a) pupil-to-teacher ratios across higher and lower poverty districts and (b) the relative competitiveness of teacher wages statewide when compared with nonteachers at similar education level and age. In that report, we provided preliminary evidence that more equitable funding distributions with respect to poverty concentrations did indeed translate to more equitable distributions of pupil-to-teacher ratios. Further, states with higher funding levels tended to have, on average, more competitive teacher wages relative to other professions. In this chapter, we explore both of these additional measures during a 20-year time period, and we add measures of class size and variation in teacher wages across schools and districts using data from the National Center for Education Statistics (NCES) Schools and Staffing Survey. Specifically, we explore whether targeting of funding to higher poverty districts translates to reduction of class sizes and the number of students per teacher in higher poverty settings relative to lower poverty ones. We also explore whether targeting of funding to higher poverty settings leads to more competitive wages in those settings. A substantial body of research points to the need not merely for comparable wages, but substantial added compensation to support recruiting and retaining teachers in high-need settings. Conceptions of Equity, Equal Opportunity, and Adequacy Reforms across the nation to state school finance systems have been focused on simultaneously achieving equal educational opportunity and adequacy. While achieving and maintaining educational adequacy requires a school finance system that consistently and equitably meets a certain level of educational outcomes, it is important to maintain equal educational opportunity in those cases where funding falls below adequacy thresholds. That is, whatever the level of outcomes attained across a school system, it should be equally attainable regardless of where a child lives or attends school or his or her background. Conceptions of school finance equity and adequacy have evolved over the years. Presently, the central assumption is that state finance systems should be designed to provide children, regardless of where they live and attend school, with equal opportunity to achieve some constitutionally adequate level of outcomes (Baker and Green 2009a). Much is embedded in this statement and it is helpful to unpack it, one layer at a time. The main concerns of advocates, policy makers, academics, and state courts from the 1960s through the 1980s were to (a) reduce the overall variation in per-pupil spending across local public school districts; and (b) disrupt the extent to which that spending variation was related to differences in taxable property wealth across districts. 
That is, the goal was to achieve more equal dollar inputs—or nominal spending equity—coupled with fiscal neutrality—or reducing the correlation between local school resources and local property wealth. While modern goals of providing equal opportunity and achieving educational adequacy are more complex and loftier than mere spending equity or fiscal neutrality, achieving the more basic goals remains relevant and still elusive in many states. An alternative to nominal spending equity is to look at the real resources provided across children and school districts: the programs and services, staffing, materials, supplies and equipment, and educational facilities provided (Still, the emphasis is on equal provision of these inputs) (Baker and Green (2009b). Providing real resource equity may, in fact, require that per-pupil spending not be perfectly equal if, for example, resources such as similarly qualified teachers come at a higher price (competitive wage) in one region than in another. Real resource parity is more meaningful than mere dollar equity. Further, if one knows how the prices of real resources differ, one can better compare the value of the school dollar from one location to the next. Modern conceptions of equal educational opportunity and educational adequacy shift emphasis away from schooling inputs and onto schooling outcomes—and more specifically equal opportunity—to achieve some level of educational outcomes. References to broad outcome standards in the school finance context often emanate from the seven standards articulated in Rose v. Council for Better Education,1 a school funding adequacy case in 1989 in Kentucky that scholars consider the turning point in shifting the focus from equity to adequacy in school finance legal theory (Clune 1994). There are two separable but often integrated goals here—equal opportunity and educational adequacy. The first goal is achieved when all students are provided the real resources to have equal opportunities to achieve some common level of educational outcomes. Because children come to school with varied backgrounds and needs, striving for common goals requires moving beyond mere equitable provision of real resources. For example, children with disabilities and children with limited English language proficiency may require specialized resources (personnel), programs, materials, supplies, and equipment. Schools and districts serving larger shares of these children may require substantively more funding to provide these resources. Further, where poverty is highly concentrated, smaller class sizes and other resource-intensive interventions may be required to strive for those outcomes achieved by the state's average child. Meanwhile, conceptions of educational adequacy require that policy makers determine the desired level of outcome to be achieved. Essentially, adequacy conceptions attach a "level" of outcome expectation to the equal educational opportunity concept. Broad adequacy goals are often framed by judicial interpretation of state constitutions. It may well be that the outcomes achieved by the average child are deemed sufficient. But it may also be that the preferences of policy makers or a specific legal mandate are somewhat higher (or lower) than the outcomes achieved by the average child. 
The current buzz phrase is that schools should ensure that children are "college ready"2 One final distinction, pertaining to both equal educational opportunity and adequacy goals, is the distinction between striving to achieve equal or adequate outcomes versus providing the resources that yield equal opportunity for children, regardless of their backgrounds or where they live. Achieving equal outcomes is statistically unlikely at best, and of suspect policy relevance, given that perfect equality of outcomes requires leveling down (actual outcomes) as much as leveling up. A goal of school finance policy is to provide the resources to offset pre-existing inequalities that otherwise give one child a greater chance than another of achieving the desired outcome levels. Money and School Finance Reforms There is an increasing body of evidence that substantive and sustained state school finance reforms matter for improving both the level and distribution of short-term and long-run student outcomes. A few studies have attempted to tackle school finance reforms broadly, applying multistate analyses over time. Card and Payne (2002) found "evidence that equalization of spending levels leads to a narrowing of test score outcomes across family background groups" (Card and Payne 2002, 49). Most recently, Jackson et al. evaluated long-term outcomes of children exposed to court-ordered school finance reforms, finding that "a 10 % increase in per-pupil spending each year for all 12 years of public school leads to 0.27 more completed years of education, 7.25 % higher wages, and a 3.67 percentage-point reduction in the annual incidence of adult poverty; effects are much more pronounced for children from low-income families" (2015, 1). Numerous other researchers have explored the effects of specific state school finance reforms over time, applying a variety of statistical methods to evaluate how changes in the level and targeting of funding affect changes in outcomes achieved by students directly affected by those funding changes. Figlio (2004) says that the influence of state school finance reforms on student outcomes is perhaps better measured within states over time, explaining that national studies of the type attempted by Card and Payne confront problems of (a) the enormous diversity in the nature of state aid reform plans, and (b) the paucity of national level student performance data. Several such studies provide compelling evidence of the potential positive effects of school finance reforms. Studies of Michigan school finance reforms in the 1990s have shown positive effects on student performance in both the previously lowest spending districts3 and previously lower performing districts.4 Similarly, a study of Kansas school finance reforms in the 1990s, which also primarily involved a leveling up of low-spending districts, found that a 20 % increase in spending was associated with a 5 % increase in the likelihood of students going on to postsecondary education (Deke 2003). Three studies of Massachusetts school finance reforms from the 1990s find similar results. The first, by Thomas Downes and colleagues, found that the combination of funding and accountability reforms "has been successful in raising the achievement of students in the previously low-spending districts." 
(2009, 5) The second found that "increases in per-pupil spending led to significant increases in math, reading, science, and social studies test scores for 4th- and 8th-grade students."5 The most recent of the three, published in 2014 in the Journal of Education Finance, found that "changes in the state education aid following the education reform resulted in significantly higher student performance" (Nguyen-Hoang and Yinger 2014, 297). Such findings have been replicated in other states, including Vermont.6 Indeed, the role of money in improving student outcomes is often contested. Baker (2012) explains the evolution of assertions regarding the unimportance of money for improving student outcomes, pointing out that these assertions emanate in part from misrepresentations of the work of Coleman and colleagues in the 1960s, which found that school factors seemed less associated with student outcome differences than did family factors. This was not to suggest, however, that school factors were entirely unimportant, and more recent reanalyses of the Coleman data using more advanced statistical techniques than available at the time clarify the relevance of schooling resources (Konstantopoulos and Borman 2011; Borman and Dowling 2010). Eric Hanushek ushered in the modern-era "money doesn't matter" argument in a study in which he tallied studies reporting positive and negative correlations between spending measures and student outcome measures, proclaiming as his major finding: "There appears to be no strong or systematic relationship between school expenditures and student performance" (1986, 1162).7 Baker (2012) summarized reanalyses of the studies tallied by Hanushek, applying quality standards to determine study inclusion, and finding that more of the higher quality studies yielded positive findings with respect to the relationship between schooling resources and student outcomes (Baker 2012). While Hanushek's above characterization continues to permeate policy discourse over school funding—and is often used as evidence that "money doesn't matter"—it is critically important to understand that this statement is merely one of uncertainty about the direct correlation between spending measures and outcome measures based on studies prior to 1986. Neither this statement, nor the crude tally behind it, ever provided any basis for assuming with certainty that money doesn't matter. A separate body of literature challenges the assertion of the positive influence of state school finance reforms in general and court-ordered reforms in particular. Baker and Welner (2011) explain that much of this literature relies on anecdotal characterizations of lagging student outcome growth following court-ordered infusions of new funding. Hanushek (2009) provide one example of this anecdote-driven approach in a book chapter that seeks to prove that court-ordered school funding reforms in New Jersey, Wyoming, Kentucky, and Massachusetts resulted in few or no measurable improvements. However, these conclusions are based on little more than a series of descriptive graphs of student achievement on the National Assessment of Educational Progress (NAEP) in 1992 and 2007 and an undocumented assertion that, during that period, each of the four states infused substantial additional funds into public education, focused on low-income and minority students, in response to judicial orders. They assume that, in all other states that serve as a comparison, similar changes did not occur. Yet they validate neither assertion. 
Baker and Welner (2011) explain that Hanushek and Lindseth failed to measure whether substantive changes had occurred to the level or distribution of school funding as well as when and for how long. For example, Kentucky reforms had largely faded by the mid- to late 1990s, yet Hanushek and Lindseth measure postreform effects in 2007. Similarly, in New Jersey, infusions of funding occurred from 1998 to 2003 (or, arguably, 2005). But Hanushek and Lindseth's window includes 6 years on the front end where little change occurred. Further, funding was infused into approximately 30 specific New Jersey districts, but Hanushek and Lindseth (2009) explore overall changes to outcomes among low-income children and minorities using NAEP data, where some of the children tested attended the districts receiving additional support but many did not.8 Finally, Hanushek and Lindseth concede that Massachusetts did, in fact experience substantive achievement gains, but attribute those gains to changes in accountability policies rather than funding. In an equally problematic analysis, Neymotin (2010) set out to show that court-ordered infusions of funding in Kansas following Montoy v. Kansas led to no substantive improvements in student outcomes. However, Neymotin evaluated changes in school funding from 1997 to 2006 even though the key Supreme Court decision occurred in January 2005 and impacted funding starting in the 2005–2006 school year, the end point of Neymotin's outcome data (Baker and Welner 2011). Finally, Greene and Trivitt (2008) present a study in which they claim to show that court-ordered school finance reforms led to no substantive improvements in student outcomes. However, while those authors offer the conclusion that court-ordered funding increases had no effect, they test only whether the presence of a court order is associated with changes in outcomes; they never once measure whether substantive school finance reforms followed the court order (also see Neymotin 2010). To summarize, there exists no methodologically competent analyses yielding convincing evidence that significant and sustained funding increases provide no educational benefits, and relatively few do not show decisively positive effects (Baker and Welner 2011). On balance, it is safe to say that a sizable and growing body of rigorous empirical literature validates that state school finance reforms can have substantive, positive effects on student outcomes, including reductions in outcome disparities or increases in overall outcome levels (Baker and Welner 2011). Resources That Matter The premise that money matters for improving school quality is grounded in the assumption that having more money provides schools and districts the opportunity to improve the qualities and quantities of real resources. The primary resources involved in the production of schooling outcomes are human resources—the quantity and quality of teachers, administrators, support, and other staff in schools. Quantities of school staff are reflected in pupil-to-teacher ratios and average class sizes. Reduction of class sizes or reductions of overall pupil-to-staff ratios require additional staff, and thus additional money, assuming wages and benefits for additional staff remain constant. Quality of school staff depend in part on the compensation available to recruit and retain them—specifically salaries and benefits, in addition to working conditions. 
Notably, working conditions may be reflected in part through measures of workload, like average class sizes, as well as the composition of the student population. A substantial body of literature has accumulated to validate the conclusion that both teachers' overall and relative wages affect the quality of those who choose to enter the teaching profession, and whether they stay once they get in. For example, Murnane and Olsen (1989) found that salaries affect the decision to enter teaching and the duration of the teaching career, while Figlio (1997, 2002) and Ferguson (1991) concluded that higher salaries are associated with more qualified teachers. Loeb and Page (2000) tackled the specific issues of relative pay noted above. They showed that: Once we adjust for labor market factors, we estimate that raising teacher wages by 10 % reduces high school dropout rates by 3–4 %. Our findings suggest that previous studies have failed to produce robust estimates because they lack adequate controls for non-wage aspects of teaching and market differences in alternative occupational opportunities. In short, while salaries are not the only factor involved, they do affect the quality of the teaching workforce, which in turn affects student outcomes. Research on the flip side of this issue—evaluating spending constraints or reductions—reveals the potential harm to teaching quality that flows from leveling down or reducing spending. For example, Figlio and Rueben (2001) note that, "Using data from the National Center for Education Statistics we find that tax limits systematically reduce the average quality of education majors, as well as new public school teachers in states that have passed these limits." Salaries also play a potentially important role in improving the equity of student outcomes. While several studies show that higher salaries relative to labor market norms can draw higher quality candidates into teaching, the evidence also indicates that relative teacher salaries across schools and districts may influence the distribution of teaching quality. For example, Ondrich et al. (2008) "find that teachers in districts with higher salaries relative to non-teaching salaries in the same county are less likely to leave teaching and that a teacher is less likely to change districts when he or she teaches in a district near the top of the teacher salary distribution in that county." Others have argued that the dominant structure of teacher compensation, which ties salary growth to years of experience and degrees obtained, is problematic because of weak correlations with student achievement gains, creating inefficiencies that negate the relationship between school spending and quality (Hanushek 2011). Existing funds, they argue, instead could be used to compensate teachers according to (measures of) their effectiveness while dismissing high-cost "ineffective" teachers and replacing them with better ones, thus achieving better outcomes with the same or less money (Hanushek 2009). This argument depends on four large assumptions. First, adopting a pay-for-performance model, rather than a step-and-lane salary model, would dramatically improve performance at the same or less expense. Second, shedding the "bottom 5 % of teachers" according to statistical estimates of their "effectiveness" can lead to dramatic improvements at equal or lower expense. Third, it assumes there are sufficiently accurate measures of teaching effectiveness across settings and children. 
Finally, this argument ignores the initial sorting of teachers into schools where more marketable teachers head for more desirable settings. Existing studies of pay-for-performance compensation models fail to provide empirical support for this argument—either that these alternatives can substantially boost outcomes, or that they can do so at equal or lower total salary expense (Springer et al. 2011). Simulations purporting to validate the long-run benefits of deselecting "bad" teachers depend on the average pool of replacements lining up to take those jobs being substantively better than those who were let go (average replacing "bad"). Simulations promoting the benefits of "bad teacher" deselection assume this to be true, without empirical basis, and without consideration for potential labor market consequences of the deselection policy itself (Baker et al. 2013a). Finally, existing measures of teacher "effectiveness" fall well short of these demands (Ibid.). Most importantly, arguments about the structure of teacher compensation miss the bigger point—the average level of compensation matters with respect to the average quality of the teacher labor force. To whatever degree teacher pay matters in attracting good people into the profession and keeping them around, it's less about how they are paid than how much. Furthermore, the average salaries of the teaching profession, with respect to other labor market opportunities, can substantively affect the quality of entrants to the teaching profession, applicants to preparation programs, and student outcomes. Diminishing resources for schools can constrain salaries and reduce the quality of the labor supply. Further, salary differentials between schools and districts might help to recruit or retain teachers in high-need settings. So, too, does investment in improved working conditions, from infrastructure to smaller class sizes and total student loads. In other words, resources for teacher quality matter. Ample research indicates that children in smaller classes achieve better outcomes, both academic and otherwise, and that class-size reduction can be an effective strategy for closing racial or socioeconomic achievement gaps (U.S. Department of Education et al. 2003). While it's certainly plausible that other uses of the same money might be equally or even more effective, there is little evidence to support this. For example, while we are quite confident that higher teacher salaries may lead to increases in the quality of applicants to the teaching profession and increases in student outcomes, we do not know whether the same money spent toward salary increases would achieve better or worse outcomes if it were spent toward class size reduction. Some have raised concerns that large-scale class-size reductions can lead to unintended labor market consequences that offset some of the gains attributable to class-size reduction (such as the inability to recruit enough fully qualified teachers). For example, studies of California's statewide class-size reduction initiative suggest that as districts across the socioeconomic spectrum reduced class sizes, fewer high-quality teachers were available in high-poverty settings (Jepsen and Rivkin 2002).9 While it would be useful to have more precise cost-benefit analyses regarding the tradeoffs between applying funding to class-size reduction versus increased compensation (Ehrenberg et al. 
2001), the preponderance of existing evidence suggests that the additional resources expended on class-size reductions do produce positive effects. Both reductions to class sizes and improvements to competitive wages can yield improved outcomes, but the gains in efficiency of choosing one strategy over the other are unclear, and local public school districts rarely have complete flexibility to make tradeoffs because class-size reduction may be constrained by available classrooms (Baker and Welner 2012). Smaller class sizes and reduced total student loads are a relevant working condition simultaneously influencing teacher recruitment and retention (Loeb et al. 2005; Isenberg 2010). That is, providing smaller classes may partly offset the need for higher wages for recruiting or retaining teachers. High-poverty schools require both strategies rather than an either-or proposition when it comes to smaller classes and competitive wages. As discussed above, achieving equal educational opportunity requires leveraging additional real resources—lower class sizes and more intensive support services—in high-need settings. Merely achieving equal-quality real resources, including equally qualified teachers, likely requires higher competitive wages, not merely equal pay in a given labor market. As such, higher-need settings may require substantially greater financial inputs than lower-need settings. Lacking sufficient financial inputs to do both, districts must choose one or the other. In some cases, higher need districts may lack sufficient resources to reduce class sizes or provide more intensive support. In this chapter, we explore the relationship between financial inputs and these tradeoffs, both within and across states, and over time. Specifically, we address the following questions: What patterns in national and state funding equity and adequacy do we see over the last two decades? What patterns do we find in access to important school resources, namely wage competitiveness and staffing ratios, over the same time period? What is the relationship between the adequacy and equity of school funding and access to real resources (teacher wages, staffing ratios, and class sizes)? Measuring Fiscal Input as Well as Real Resource Equity and Adequacy In this section, we draw on several national data sources to develop indicators of (a) school funding levels and distributions, (b) staffing levels and distributions and (c) relative wage levels and distributions (see Appendix (Table 4A.1) for full list of data sources, years, and measures). Ultimately, our goal is to examine the levels and distributions of fiscal input, staffing, and wages and discern their relationship. Our following analyses use national data sources over time to draw the various connections displayed in Fig. 4.1. First, the amount of effort a state puts forth, in addition to wealth and income, influences the level of resources made available to schools. Revenues available to schools translate to expenditures, and those expenditures may be leveraged to support more competitive wages, hiring and retaining more staff, or both. While we do not in this chapter include measures that connect inputs to student outcomes, we do expect staffing quantities and qualities to substantively influence those outcomes. We also document the relationships between financial resources and the real resources purchased with those financial resources. 
We explore these linkages in terms of state average levels of resources and within-state distributions of those resources with respect to concentrations of child poverty across districts. Fig. 4.1 Conceptual map of fiscal inputs & real resources These relationships, while relatively straightforward, have not been systematically documented across all states over time in recent years.10 Specifically, there is little documentation of the relationship across states between the level of commitment made by states to their public schooling systems and the average competitiveness of teacher wages, and little documentation of the extent to which differences in and changes in spending levels translate to changes in staffing ratios and class sizes.11 Evaluating Funding Levels and Fairness We begin with our model for estimating levels and variation in school districts' state and local revenue. Our objectives are twofold: first, to compare across states the amount a school district would be expected to receive in state and local revenue (and current operating expenditure) if the district was of a given enrollment size (economies of scale) and population density, faced national average labor costs, and served a population with relatively average child poverty levels; second, to evaluate within states the amount that a school district would be expected to receive in state and local revenue (and current operating expenditure) at varied levels of child poverty, holding constant labor costs, district enrollment size, and population density. The goal here is to make more reasonable comparisons of revenue and expenditure levels across local public school districts from one state and to another. So adjustments are made accordingly in our models. Average spending per pupil might be higher in states with higher labor costs. To compare the purchasing power of that spending, we adjust for those cost differences. Average spending per pupil might also be higher in states where more children attend school in population-sparse, small, rural districts. Thus, we compare spending for districts of otherwise similar size and population density across states—a "what if" analysis assuming a district size of 2000 or more pupils with average population density. Similarly, unified K-12 districts might have different average spending than K-8 or high school districts; thus we base our comparisons on unified K-12 districts. Finally, we compare revenue and spending predictions for districts of similar child poverty rates, as child poverty influences the costs of achieving common outcome goals (Duncombe and Yinger 2005). For both objectives, we use a 20-year (1993–2012) set of local public school district data to which we fit the following model: $$ \begin{array}{c}\mathrm{Funding}\kern0.24em \mathrm{per}\kern0.24em \mathrm{Pupil}=f\Big(\mathrm{Regional}\kern0.24em \mathrm{Competitive}\kern0.24em \mathrm{Wages},\kern0.24em \mathrm{District}\kern0.24em \mathrm{Size}\times \\ {}\kern5.73em \mathrm{Population}\kern0.24em \mathrm{Density},\kern0.24em \mathrm{Grade}\kern0.24em \mathrm{Range}\kern0.24em \mathrm{Served},\\ {}\kern3.23em \mathrm{State}\times \mathrm{Census}\kern0.24em \mathrm{Child}\kern0.24em \mathrm{Poverty}\kern0.24em \mathrm{Rate})\end{array} $$ To account for variation in labor costs, we use the NCES Education Comparable Wage Index, updated through 2012 by the author of the original index (Extending the NCES CWI 2013). We impute additional years as necessary (see Appendix). 
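As a rough illustration of how a model of this general form might be estimated, the R sketch below fits an enrollment-weighted district-level regression and generates predicted funding at chosen poverty rates; the district-size dummies, density interaction, and state-by-poverty interaction described here and in the next paragraph are represented only schematically. The data frame `districts` and all column names are assumptions made for illustration, not the authors' actual code or data layout.

```r
# Illustrative sketch only; `districts` and its columns are assumed:
#   rev_pp  - state and local revenue per pupil
#   cwi     - NCES Education Comparable Wage Index value (1 = national average)
#   sizecat - factor of enrollment-size categories (">2000" pupils as reference)
#   density - county population density
#   grades  - grade range served ("K12" unified districts as reference)
#   state   - state identifier
#   poverty - census child poverty rate, expressed as a proportion
#   enroll  - district enrollment, used as the regression weight
fit <- lm(rev_pp ~ cwi + sizecat * density + grades + state * poverty,
          data = districts, weights = enroll)

# Predicted revenue for an efficiently sized K-12 district at national average
# labor costs and density, at 0%, 10%, and 30% poverty (one state shown)
newdat <- data.frame(cwi = 1, sizecat = ">2000",
                     density = mean(districts$density),
                     grades = "K12", state = "NJ",
                     poverty = c(0, 0.10, 0.30))
pred <- predict(fit, newdata = newdat)
pred[3] / pred[1]  # 30%-to-0% poverty ratio, anticipating the fairness ratio below
```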
We account for district size with a series of dummy variables indicating that a district has (a) under 100 pupils, (b) 101–300 pupils, (c) 301–600 pupils, (d) 601–1200 pupils, (e) 1201–1500 pupils, and (f) 1501–2000 pupils, where the baseline comparison group are districts with over 2000 pupils, a common reference point for scale efficiency. The district size factor is interacted with county-level population density to further correct for cost differences associated with small, sparse, rural districts, separating them from segregated enclaves in population-dense metropolitan areas. Finally, we interact state dummy indicators with district level child poverty rate to estimate the within-state, cross-district distribution of funding with respect to child poverty. The regression model is weighted by district enrollment size. We then use this model to generate predicted values of the funding measure—total state and local revenues per pupil and current operating spending per pupil—at varied levels of child poverty for each state at national average labor costs, average population density, and efficient size. To compare levels of funding across states, we compare predicted revenue and spending at 10 % census poverty, holding other factors constant. To compare distributions, we construct what we call a "fairness ratio." It is the ratio of the predicted funding level for a high poverty district (30 % census poverty, equivalent to about 60–80 % qualified for the National School Lunch Program), relative to that of a low poverty (0 % census poverty) district. A fairness ratio above 1 indicates that the state provides a greater level of resources to high poverty districts than low poverty districts, while a ratio below 1 indicates that high-poverty districts have fewer resources. $$ \mathrm{Fairness}\kern0.24em \mathrm{Ratio}=\frac{\mathrm{Predicted}\kern0.24em \mathrm{Funding}\kern0.36em \mathrm{at}\kern0.24em 30\%\kern0.24em \mathrm{Poverty}}{\mathrm{Predicted}\kern0.24em \mathrm{Funding}\kern0.24em \mathrm{at}\kern0.24em 0\%\kern0.24em \mathrm{Poverty}} $$ Evaluating Resource Levels and Fairness The next step is to estimate levels of real resources in otherwise comparable settings across states and to estimate variations in real resources with respect to child poverty. Estimating Staffing Levels and Distributions Our approach to modeling staffing levels follows the one we used to model funding levels. We use annual data from 1993 to 2012 and apply the same model as above, except putting numbers of teachers per 100 pupils on the left-hand side. Again, the premises are: overall staffing ratios might be higher on average (better) in states with more children in small, low-population-density districts; staffing ratios (given spending levels) might be lower (worse) in states facing higher labor costs; and staffing ratios should vary with respect to children's educational needs, as proxied by district poverty measures. 
$$ \begin{array}{c}\mathrm{Teachers}\kern0.24em \mathrm{per}\kern0.24em 100\kern0.24em \mathrm{Pupils}=f\Big(\mathrm{Regional}\kern0.24em \mathrm{Competitive}\kern0.24em \mathrm{Wages},\kern0.24em \mathrm{District}\kern0.24em \mathrm{Size}\times \\ {}\kern5.4em \mathrm{Population}\kern0.24em \mathrm{Density},\kern0.24em \mathrm{Grade}\kern0.24em \mathrm{Range}\kern0.24em \mathrm{Served},\\ {}\kern3.6em \mathrm{State}\times \mathrm{Census}\kern0.24em \mathrm{Child}\kern0.24em \mathrm{Poverty}\kern0.24em \mathrm{Rate})\end{array} $$ We then use this model to (a) generate predicted values of teachers per 100 pupils at given levels of poverty, within each state and (b) generate a staffing fairness ratio like our funding fairness ratio. Evaluating the Average Competitiveness of Teacher Wages As discussed above, one way in which teacher wages matter is that the average relative wage of teachers versus other professions in a given labor market may influence the quality of those entering and staying within the teaching workforce. Here, we use the U.S. Census Bureau's American Community Survey (ACS) annual data from 2000 to 2012 to estimate, for each state, the ratio of the expected income from wages for an elementary or secondary school teacher to the expected income from wages for a nonteacher at the same age and degree level. Of primary interest here are the differences in competitive wage ratios across states, and ultimately, whether states that allocate more resources to education generally are able to achieve more competitive teacher wages. Here, we compare annual wages of teachers to nonteachers, but we also note that variation across states remains similar with a comparison of weekly or monthly wages, although teacher wages do become more comparable to nonteacher wages. Recall that literature on teacher wages and teacher quality suggests that the more competitive the teacher wage (relative to other career options), the higher the expected quality of entrants to the profession. To generate our competitive wage ratios, we begin with a regression model fit to our 13-year set of ACS data, in which we estimate the relationship between "income from wages" as the dependent variable, a series of state indicators, and an indicator that the individual is a teacher (occupation) in elementary or secondary education (industry). We include an indicator of the teacher's age and education level, and we include measures of hours worked per week and weeks worked per year but do not equate our predicted wages by holding constant these latter two factors in the analyses. We estimate the following model: $$ \begin{array}{c}\kern0.5em \mathrm{Income}\;\mathrm{from}\;\mathrm{Wages}=f\Big(\mathrm{State}\;\mathrm{Place}\;\mathrm{of}\;\mathrm{Work},\;\mathrm{k}12\;\mathrm{Teacher},\;\mathrm{Age},\;\mathrm{Education}\;\mathrm{Level},\\ {}\mathrm{Hours}\;\mathrm{per}\;\mathrm{Week},\;\mathrm{Week}\mathrm{s}\;\mathrm{per}\;\mathrm{Year})\end{array} $$ We use this model to generate predicted values for teacher and nonteacher wages at specific age points, for individuals with a bachelor's degree, and then take the ratio of teacher to nonteacher wages. Of particular interest are (a) the differences in the teacher/nonteacher wage ratio across states and (b) the changes over time within states in the teacher/nonteacher wage ratio. That is, are teacher wages more competitive in some states than others? And have teachers generally gained or lost ground? 
Are these differences in wage competitiveness and gains or losses related back to state funding levels? Estimating Sensitivity of Resources to Funding Across Districts For these last two analyses, we link our data on district-level finances with teacher-level data from the NCES Schools and Staffing Survey (SASS), which includes over 40,000 public school teachers, surveyed in waves on approximately 4-year cycles. We use data from the 1993–1994, 1999–2000, 2003–2004, 2007–2008, and 2011–2012 cycles. Because personnel costs vary across labor markets within states, it is important when evaluating either teacher quantity measures or teacher wages to make direct comparisons only among districts facing similar personnel costs. Further, because livable wages similarly vary across labor markets, but income thresholds for determining whether families are in poverty do not, it also makes sense to compare poverty rates only across local public school districts sharing a labor market (Baker et al. 2013b). A convenient solution is to re-express per-pupil spending measures and child poverty rates for each school district in the nation relative to (as a ratio to) the average per-pupil spending and child poverty rates for all districts sharing that same labor market. We use a similar strategy for evaluating variations in both class sizes and competitive teacher wages, with the latter comparisons requiring a preliminary step of determining the wage for teachers of comparable qualifications and contractual obligations. This analysis is different from the previous analyses because we are working with samples of teachers and schools where total sample sizes and the distribution of sampled teachers for many states are insufficient for characterizing cross-district equity. As a result, we ask whether nationally, across nonrural labor markets, there exists the expected relationship between the relative funding available to local public school districts, and the class sizes and wages of teachers in those school districts. That is, do schools in districts with better funding tend to have smaller class sizes, more competitive wages, or both? Class Sizes To estimate the sensitivity of class size variation to spending variation across schools within labor markets, we estimate separate models of departmentalized and self-contained class sizes. We estimate class sizes as a function of (a) relative spending, (b) relative poverty, and (c) grade level taught. $$ \mathrm{Class}\kern0.24em \mathrm{Size}=f\left(\mathrm{Relative}\kern0.24em \mathrm{Spending},\kern0.24em \mathrm{Relative}\kern0.24em \mathrm{Poverty},\kern0.24em \mathrm{Grade}\kern0.24em \mathrm{Level}\right) $$ Teacher Wages While the previous wage indicator compared teacher salaries to nonteachers, this dataset allows us to compare wages among similar teachers within labor markets, but in different school districts. The relative competitiveness of teacher salaries is then examined in the context of the relative poverty and relative funding levels of school districts. This analysis offers further evidence as to whether districts can leverage funding resources to provide more competitive wages to teachers in other, less resourced districts. In other words, does the distribution of funding affect districts' ability to offer competitive wages, and therefore influence the distribution of quality teachers across districts? We begin by estimating, within each labor market in each state, the relative wage of teachers with a specific set of credentials. 
We focus on full-time classroom teachers, estimating their salaries (base pay from school year teaching) as a function of (a) experience and (b) degree level within (c) labor market (as defined in the Education Comparable Wage Index, aligned with metropolitan and micropolitan statistical areas). We exclude teachers outside of metropolitan and micropolitan areas because of small sample sizes within rural labor markets. We estimate separate models for each SASS wave:

$$ \text{Salary} = f(\text{Experience},\ \text{Degree},\ \text{Labor Market}) $$

Next, we generate the predicted salary for each teacher in each labor market, identifying the average wage for a teacher at a given experience and degree level across all schools in each labor market. We then take the ratio of actual salary to predicted salary, which indicates for all teachers in the sample whether their salary is higher or lower than expected. Aggregated to the school or district level, we have a measure of the relative competitiveness of teacher wages in each school or district compared to other schools or districts sharing the same labor market.

The next step is to estimate the sensitivity of these wage variations to spending variations across districts sharing the same labor market. We do this with the teacher-level data, linked to a measure of the relative spending of their school district in its labor market, and the relative poverty rate of the school district in its labor market. We take the district's current operating spending per pupil as a ratio to the average of all other districts in the labor market and do the same with the district poverty rate. We then estimate the joint relationship between relative spending and relative poverty and the relative competitiveness of teachers' salaries. We include additional dummy variables for grade level taught, again including only nonrural full-time teachers:

$$ \text{Salary Competitiveness} = f(\text{Relative Spending},\ \text{Relative Poverty},\ \text{Grade Level Taught}) $$

We begin by reviewing longitudinal trends in funding levels and funding fairness. We also validate the extent to which state school funding levels are associated with differences in fiscal effort, or the share of gross state product allocated to schools. Next, we summarize changes to the distribution of funding across school districts within states, specifically evaluating the funding fairness profiles of states and how those profiles have changed over the past 20 years. We then proceed to explore average competitive wage levels across states from 2000 to 2012, and pupil-to-teacher ratios across states over the full 20-year period. We subsequently explore the connections between measures of the level and distribution of financial inputs to schooling, and the level and distribution of staffing quantities and staffing qualities. Specifically, we evaluate whether state spending levels are associated with the state average competitiveness of teacher wages and state average staffing ratios (pupil-to-teacher ratios). Then we explore whether within-state distributions of financial inputs to schooling are associated with within-state distributions of staffing ratios, class sizes, and competitive wages.
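Before turning to those results, the sketch below illustrates the two-step salary-competitiveness analysis described above. It assumes a teacher-level frame already merged with the relative spending and relative poverty measures from the previous step; the column names (salary, experience, degree, labor_market, grade_level, rel_spend, rel_pov) are placeholders rather than actual SASS variable names, and a single pooled fit stands in for the separate per-wave models.

```python
# Two-step sketch: (1) within each labor market, predict salary from experience
# and degree and express actual salary relative to the prediction; (2) relate that
# competitiveness measure to relative spending and poverty. Names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

t = pd.read_csv("sass_fulltime_teachers.csv")  # hypothetical nonrural, full-time extract

def add_relative_salary(group):
    fit = smf.ols("salary ~ experience + C(degree)", data=group).fit()
    group["salary_comp"] = group["salary"] / fit.predict(group)
    return group

# Step 1: actual-to-predicted salary ratio within each labor market.
t = t.groupby("labor_market", group_keys=False).apply(add_relative_salary)

# Step 2: salary competitiveness as a function of the district's relative spending
# and relative poverty within its labor market, with grade-level dummies.
step2 = smf.ols("salary_comp ~ rel_spend + rel_pov + C(grade_level)", data=t).fit()
print(step2.params[["rel_spend", "rel_pov"]].round(3))
```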
Adequacy and Equity of Fiscal Inputs

Figure 4.2 presents the national averages of current spending per pupil and state and local revenues per pupil, adjusted for changes in labor costs by dividing each district's revenue or spending figure by the comparable wage index for that district. Both revenues and spending are included to illustrate how the two largely move together over time, as one would expect. The Education Comparable Wage Index adjusts for both regional variation in labor costs (input prices) and inflationary change in labor costs. Figure 4.2 shows that on average, using district-level data weighted by student enrollments, state and local revenues and per-pupil spending are up approximately 4.5–5.5 % over the period, reaching a high around 2008 and returning to levels comparable to 2000 by 2012.

Fig. 4.2 Input price adjusted revenue and spending

Figure 4.3 summarizes the trends in predicted state and local revenue levels for all states, organized by regions. These are combined state and local revenues per pupil, predicted for a district with 10 % child poverty, of 2000 or more pupils at constant labor costs (though not fully corrected for inflation). Of particular interest are the trends, divergences, and convergences among regionally contiguous states. A notable feature of these figures is the sharp shift in growth trajectories that occurs in most states around 2009 as a function of the recession. New Jersey, for example, experienced a particularly strong downturn. Delaware is the only state in this mix to show no recovery as of yet. Related work has shown that these downturns were largely a function of sharp reductions in state aid, buffered in some cases by increases to local property taxes. But those shifts in responsibility from state funding onto local property tax have potential equity consequences. Average revenue may have rebounded with offsetting property tax increases, but inequity is likely to have increased as a result.

Fig. 4.3 Predicted state and local revenues over time by state

Figure 4.4 illustrates the relationship in 2012 between the percent of gross state product expended on K-12 schools and the average level of state and local revenue. In short, higher effort states do have higher funding levels. Certainly, some relatively low fiscal capacity states like Mississippi apply average effort and still end up with low funding, while high fiscal capacity states like Wyoming or Connecticut are able to apply much lower effort and yield far greater resources. But effort matters above and beyond wealth and income. While some might assume that effort crept upward as fiscal capacity declined during the recession, this assumption is generally wrong. Political proclivity for cutting taxes has led, on average, to reductions in funding effort. Forty-one states reduced effort from 2007 to 2012. Further, 5-year changes in effort are strongly associated with 5-year changes in revenue levels, as might be expected (correlation = .7 excluding Alaska). States that reduced effort generally reduced school revenues proportionately.

Fig. 4.4 Relationship between effort and revenue (Note: See Appendix (Table 4A.2) for full information by state)

Current Expenditure "Fairness" (Spending Equity)

So what then have been the consequences of the economic downturn for school spending fairness across states? That is, how have higher poverty districts been differentially affected when compared with lower poverty ones?
Table 4.1 summarizes numbers of states where funding fairness improved (or not) over specific time periods over the past 20 years. Again, a funding fairness ratio of .95 means that a district with 30 % of children in poverty has only 95 % of the funding of a district with 0 % children in poverty. A fairness ratio of 1.05 indicates that a district with 30 % poverty has 5 % greater funding than a district with 0 % poverty.

Table 4.1 Numbers of states where the funding fairness ratio has improved, by initial fairness ratio among improved states (<.95, .95–1.05)

From 1993 to 2007 in particular, 40 different states experienced increased funding levels in higher poverty districts relative to lower poverty ones (only 33 sustained the pattern over the entire period from 1993 to 2012). But in the 5 years that followed, 30 states reduced funding fairness, with some of the greatest reductions coming in states that had previously experienced the greatest improvements, including New Jersey. Table 4.2 summarizes the state-by-state current expenditure fairness ratios and changes over time. As noted in Table 4.1, most states did improve their fairness ratios over the entire period, but many reduced fairness over the past 5 years. Massachusetts improved fairness at the outset of the period, as did New Jersey, but both states taper off in recent years. Other states like Pennsylvania started the period with relatively flat distributions (similar funding in higher and lower poverty districts) and then slid into more regressive distributions over time.

Table 4.2 Spending fairness indices for select years (fairness ratio of current operating expenditures per pupil, with 20-year and 5-year changes)

Notably, these findings present funding progressiveness in a more positive light than those in the report Is School Funding Fair, because these figures are based on current operating spending per pupil, which includes the expenditure of federal funds. Those federal funds tend to lift (by around 5 %) the levels of funding in the highest poverty districts, thus improving the funding fairness index.

Resource Models

Relative Annual Wage of Teachers

Table 4.3 summarizes changes to the state average competitiveness of teacher wages over the past 12 years, and then for the most recent 5 years. Wage competitiveness is expressed as a ratio of teacher wages to nonteacher wages. A ratio less than 1 means teachers earn less than comparable nonteachers. It's important to understand in this case that there are two moving parts: teacher wages and nonteacher wages. Teacher wages can become more competitive if they remain relatively constant but wages of others (at the same age and education level) decline. Teacher wages can become less competitive even if they appear to grow but do so more slowly than wages in other sectors. Put simply, it's all relative, but it is the relative wage that matters. From 2000 to 2012, teacher wages in every state became less competitive, based on our model, a finding that is consistent with similar work by Mishel et al. (2011). It would appear that over the last 5 years, only in Iowa did teacher wages become marginally more competitive. Over the 12-year period, the state average (unweighted) reduction in wage competitiveness was 12 %. Over the period from 2007 to 2012, the state average reduction in wage competitiveness was 8 %.
Summary of changes in wage competitiveness # States that increased wage competitiveness State mean change (%) But, as can be seen in Table 4.4, these estimates tend to jump around, especially in low population states like Alaska. States with persistently noncompetitive teacher wages include Colorado and Arizona. Teacher wages have tended over time to be more competitive in rural states (where nonteacher wages aren't as high), including Montana and Wyoming. Average teacher wages in New York and Rhode Island have also tended to be more competitive, though data are inconsistent across years. Teacher/nonteacher wage ratios for select years Wage competitiveness ratio (Teacher/Nonteacher) (%) Change over time (%) Teachers per 100 Pupils Table 4.5 summarizes changes to the numbers of teachers per 100 pupils over time. Over the entire 20-year period, nearly all states increased numbers of staff per 100 pupils. The state average (unweighted) increase was approximately 1 additional teacher per 100 pupils, moving from about 5.5 to about 6.5 total teachers per 100 pupils. Most of those gains occurred prior to 2002. Over the past 10 years, state average staffing increases have been much more modest, and over the past 5 years, nonexistent. Summary of staffing level changes over time # States that improved staffing ratios State average change Table 4.6 displays state-by-state ratios of teachers per 100 pupils and changes in those ratios. States including Alabama and Virginia appear to have reduced teachers per 100 pupils by over 1.0 (or around 13–16 %). About half of states continued to increase numbers of teaching staff per 100 pupils. Notably, these figures change over time both as a function of changing numbers of staff and of changing numbers of pupils. States with constant staffing but declining enrollments will show increasing staffing ratios. States with increasing enrollment but no additional staff will show decreasing staffing ratios. Predicted staffing ratios for select years Relationships Across Adequacy (Level) Measures Here we explore the relationships among these indicators. Figure 4.5 conveys that states with higher per pupil spending tend to have more teachers per 100 pupils on average. This suggests that, on balance and across states, higher spending on schools is leveraged to increase staffing quantities. The next question is the extent to which these increased overall staffing quantities translate to decreased class sizes, where research literature tends to point to more positive effects on student outcomes. Spending levels and staffing levels 2011–2012 (Note: See Appendix (Table 4A.2) for full information by state) Figure 4.6 shows that these differences in overall staffing ratios do translate to smaller class sizes, both for self-contained elementary classes and for secondary departmentalized settings. That is, while some may contest the direct relevance of pupil-to-teacher ratios as having influence on schooling quality, the availability of more staff certainly provides the opportunity for, and eventual reality of, smaller classes. Relating total staffing and class size (Note: See Appendix (Table 4A.2) for full information by state) Figure 4.7 shows that variation across states in current spending levels also translates to variation in the competitiveness of teacher wages. We have already seen that states where spending is higher tend to have more teachers per pupil and smaller class sizes, consuming a share of the funds that might also be used for providing more competitive wages. 
Fig. 4.7 Spending levels and competitive wages (Note: See Appendix (Table 4A.2) for full information by state)

Figure 4.7 shows that states where school districts spend more also tend to have teacher wages more comparable to nonteachers at the same age and degree level. In other words, combining Figs. 4.5 through 4.7, it would appear that much of the cross-state variation in school spending, which is driven by cross-state variation in fiscal effort, translates into real resource differences likely to matter: more competitive wages, lower pupil-to-teacher ratios, and smaller classes.

Figure 4.8 explores the within-state distribution of resources, asking whether there exists a relationship between current spending fairness across states' school districts and staffing fairness. That is, if current spending per pupil is higher in higher poverty districts within a given state, are staffing concentrations also higher, and vice versa? Do states that provide for fairer distribution of funding yield, on average, fairer distribution of staffing ratios? The answer to that question as seen in Fig. 4.8 is, setting aside outliers (North Dakota and Alaska), yes. See Appendix (Table 4A.2) for full information by state.

Fig. 4.8 Spending fairness and staffing fairness 2011–2012 (Note: See Appendix (Table 4A.2) for full information by state)

Each of the above graphs and related correlations expresses only the relationship across states within the most recent year of data. These graphs do not speak to the question of whether increases or decreases in funding translate to increases or decreases in real resource levels or fairness. Unfortunately, our only real resource measure collected annually from 1993 to 2012 at the district level (and thus useful for evaluating both predicted state levels and within-state variation over time) is our pupil-to-teacher ratio measure.

Table 4.7 shows the results of a 20-year fixed effects model (also random effects) of the relationship between annual changes in spending levels and fairness, and pupil-to-teacher ratio fairness. The fixed effects model evaluates year-over-year changes within states. That is, to what extent do within-state changes in spending result in within-state changes in pupil-to-teacher ratio distributions? The random effects model combines evaluation of within-state differences over time with across-state differences. The cross-state comparison evaluates the extent to which states with fairer (or less fair) distributions of spending have fairer (or less fair) distributions of pupil-to-teacher ratios. R-squared values display the extent of variance that is explained by the models within states over time (averaged across states) and between states at each point in time (averaged over time). The more substantial variation across states than within any state over time yields more predictable variation (r-squared = .694).

Table 4.7 Fixed effects and random effects models of pupil-to-teacher ratio fairness (N = 50 states × 20 years; dependent variable: teachers per 100 pupils fairness; coefficients, standard errors, and p-values reported for the spending level and spending fairness measures; superscript a denotes p < .01)

In short, the model shows that when spending fairness improves, so too do staffing ratios in higher poverty districts. Each unit increase in funding fairness (increase in relative spending of higher poverty districts compared to lower poverty districts) translates to an additional 0.4 units of staffing per 100 pupils.
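As a rough illustration of how panel models of this kind can be fit, the sketch below uses a least-squares-dummy-variable version of the fixed effects specification and a random-intercept mixed model as a stand-in for the random effects comparison. The data frame and its columns (state, year, staffing_fairness, spending_fairness, spending_level) are hypothetical placeholders, and the exact covariate set in the published model may differ.

```python
# Sketch of the state-by-year panel models; variable and file names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("state_panel_1993_2012.csv")  # 50 states x 20 years, hypothetical

# Fixed effects via state dummies (LSDV): identifies the within-state, over-time
# relationship between spending fairness and staffing fairness.
fe = smf.ols(
    "staffing_fairness ~ spending_fairness + spending_level + C(state)",
    data=panel,
).fit()

# Random intercept by state: blends within-state and cross-state variation,
# analogous to the random effects column of the table.
re = smf.mixedlm(
    "staffing_fairness ~ spending_fairness + spending_level",
    data=panel,
    groups=panel["state"],
).fit()

print(fe.params["spending_fairness"], re.params["spending_fairness"])
```

With data shaped like the chapter's, a spending-fairness coefficient of roughly 0.4 in such a model would correspond to the result just described.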
Put into more realistic terms, an increase in fairness ratio from 1.0 (flat funding) to 1.25 (modestly progressive funding) leads to an increase in 0.1 of a teacher per 100 pupils in high poverty, relative to low poverty districts. These differences exist across states but also occur within states over time. The magnitude of the change over time effect is only slightly smaller than the combined change over time and cross sectional effect. In other words, whether across states at all time periods, or within states over time, the responsiveness of pupil-to-teacher ratio fairness to spending fairness is relatively consistent. To summarize, if we target additional funding to higher poverty settings, that funding translates to increased numbers of teachers and a fairer statewide distribution of staffing ratios in those districts. Of course, the inverse also follows. Figures 4.9 and 4.10 explore within year, over time, relationships between within-state variation in current spending and within-state (within-labor market) variation in (a) class sizes and (b) teacher wages (conditioned on age, experience, teaching assignment, grade level). Both figures are based on within-year (within SASS wave) models. Figure 4.9 shows that within-year (except for 2007–2008) class sizes across districts within metropolitan areas are sensitive to relative spending differences across districts within metropolitan areas. For example, as we move from average to double the average current spending, in 2011–2012, departmentalized class sizes are reduced by over seven pupils. More realistically, as a district moves from average spending for its labor market to 20 % above average, class sizes are reduced by about 1.4 students (20 % of 7). Such reductions are sufficient to be policy relevant. Recall that these estimates are conditioned on grade level taught and relative district poverty rate and include only nonrural schools. Change in class size for 1 unit change in relative spending and relative poverty (Note: Solid colored bars indicate statistically significant class size differences) Fig. 4.10 Change in salary competitiveness for 1 unit change in relative spending (Note: Solid colored bars indicate statistically significant salary differences) Figure 4.10 displays the relationship between the competitiveness of teacher salaries to other teachers with similar credentials in similar jobs on the same labor market. Teachers in districts in a given labor market where per-pupil spending is double the labor market average have 20 % higher wages than similar teachers in average spending districts on average in 2011–12. Taken together, Figs. 4.9 and 4.10 support the conclusion that spending variation translates to meaningful real resource variation across children and across districts within the same labor market. These differences are significant, and the resources in question are meaningful. Conclusions and Implications The analyses presented validate the conclusion that variations in available revenues and expenditures are associated with variations in children's access to real resources—as measured by the competitiveness of the wages paid to their teachers and by pupil-to-teacher ratios and class sizes. Put simply: States that apply more effort—spending a greater share of their fiscal capacity on schools—generally spend more on schools. These higher spending levels translate into higher statewide staffing levels—more certified teaching staff per pupil. 
These higher staffing levels translate to smaller statewide class sizes. These higher spending levels translate to more competitive statewide teacher wages. Districts that have higher spending levels within states tend to provide smaller class sizes than surrounding districts with lower spending levels. Districts that have higher spending levels within states tend to provide more competitive teacher salaries than surrounding districts with lower spending levels. These relationships hold (a) across states, (b) within states over time as resource levels change and (c) across districts within states and labor markets. The connections identified here between school funding and real resource access speak to both equity and adequacy concerns. Equity and adequacy of financial inputs to schooling across states are required if we ever expect to achieve more equitable access to a highly qualified teacher workforce (as dictated in part by the competitiveness of their compensation) and reasonable class sizes. The loftier goal of equal educational opportunity—or equal opportunity across children to strive for common outcome goals—requires not merely equal real resources, but appropriately differentiated resources, including smaller classes and additional support services with at least equally qualified teachers and other school staff. While the press is on to nationalize those outcome expectations through Common Core Standards and the assessments by which we measure them, our current system for financing schools is in full retreat from the equity and adequacy gains made between 1993 and 2007. The recent recession yielded an unprecedented decline in public school funding fairness. Thirty-six states had a 3-year average reduction in current spending fairness between 2008–2009 and 2010–2011, and 32 states had a 3-year average reduction in state and local revenue fairness over that same time period. Even after the partial rebound of 2012, 30 states remained less fair in current spending than in 2007. Nearly every state has experienced a long-term (10-year) decline in the competitiveness of teacher wages. Between 2007 and 2012, 33 states saw increases in pupil-to-teacher ratios. Notably, while equity overall took a hit between 2007 and 2012, the initial state of funding equity varied widely at the outset of the period, with Massachusetts and New Jersey being among the most progressively funded states in 2007. Thus, they arguably had further to fall. Funding equity for many states has barely budged over time and remained persistently regressive, for example, in Illinois, New York, and Pennsylvania. Potential influences on these patterns are also elusive and widely varied. In Missouri, we see the 1990s influence of desegregation orders, which capitalized on the state's matching aid program to generate additional revenue in Kansas City and St. Louis driving spending progressiveness, but when the state adopted a need-weighted foundation aid formula in 2006, spending continued to become more regressive. We see the more logical influence of school finance reforms in Massachusetts in the early 1990s and in New Jersey in the late 1990s after court orders targeting additional funds to needy districts, yielding an overall pattern of progressiveness. Court orders in New York state (2006) appears to have had little or no influence on equity, and the influence of court orders over time in Kansas have moved the needle only slightly. 
A better understanding of the role of judicial involvement requires significant additional exploration of these data linked to information on both judicial activity and legislative reforms. Finally, the coming years will tell us both whether state school finance systems can rebound from the effects of the downturn or whether these effects have become permanent, and they will inform us about the consequences for short- and long-term student outcomes. A significant body of literature has now shown the positive effects of equity and adequacy improvements of the prior 40-plus years of school finance reform. Similar methods applied years from now may reveal the deleterious influences of these dark ages of American public school finance. As per the court's declaration: "An efficient system of education must have as its goal to provide each and every child with at least the seven following capacities: (i) sufficient oral and written communication skills to enable students to function in a complex and rapidly changing civilization; (ii) sufficient knowledge of economic, social, and political systems to enable the student to make informed choices; (iii) sufficient understanding of governmental processes to enable the student to understand the issues that affect his or her community, state, and nation; (iv) sufficient self-knowledge and knowledge of his or her mental and physical wellness; (v) sufficient grounding in the arts to enable each student to appreciate his or her cultural and historical heritage; (vi) sufficient training or preparation for advanced training in either academic or vocational fields so as to enable each child to choose and pursue life work intelligently; and (vii) sufficient levels of academic or vocational skills to enable public school students to compete favorably with their counterparts in surrounding states, in academics or in the job market. Rose v. Council for Better Education, Inc., 790 S.W.2d 186, 212 (Ky. 1989). https://casetext.com/#!/case/rose-v-council-for-better-educ-inc. See PARCC website at http://www.parcconline.org. Roy (2011) published an analysis of the effects of Michigan's 1990s school finance reforms that led to a significant leveling up for previously low-spending districts. Roy, whose analyses measure both whether the policy resulted in changes in funding and who was affected, found that the proposal "was quite successful in reducing interdistrict spending disparities. There was also a significant positive effect on student performance in the lowest-spending districts as measured in state tests." (p. 137). Papke (2005), also evaluating Michigan school finance reforms from the 1990s, found that "increases in spending have nontrivial, statistically significant effects on math test pass rates, and the effects are largest for schools with initially poor performance." (p. 821). Most recently, Hyman (2013) also found positive effects of Michigan school finance reforms in the 1990s but raised some concerns regarding the distribution of those effects. Hyman found that much of the increase was targeted to schools serving fewer low-income children. But the study did find that students exposed to an additional "12 %, more spending per year during grades four through seven experienced a 3.9 % point increase in the probability of enrolling in college, and a 2.5 % point increase in the probability of earning a degree." (p. 1). 
"The magnitudes imply a $1000 increase in per-pupil spending leads to about a third to a half of a standard-deviation increase in average test scores. It is noted that the state aid driving the estimates is targeted to under-funded school districts, which may have atypical returns to additional expenditures." (Guryan 2001, 1). Downes had conducted earlier studies of Vermont school finance reforms in the late 1990s (Act 60). In a 2004 book chapter, Downes noted, "All of the evidence cited in this paper supports the conclusion that Act 60 has dramatically reduced dispersion in education spending and has done this by weakening the link between spending and property wealth. Further, the regressions presented in this paper offer some evidence that student performance has become more equal in the post-Act 60 period. And no results support the conclusion that Act 60 has contributed to increased dispersion in performance." (2004, 312). A few years later, Hanushek paraphrased this conclusion in another widely cited article as "Variations in school expenditures are not systematically related to variations in student performance" (Hanushek 1989). Hanushek describes the collection of studies relating spending and outcomes as follows: "The studies are almost evenly divided between studies of individual student performance and aggregate performance in schools or districts. Ninety-six of the 147 studies measure output by score on some standardized test. Approximately 40 % are based upon variations in performance within single districts while the remainder looks across districts. Three-fifths look at secondary performance (grades 7–12) with the rest concentrating on elementary student performance" (Fig. 25). Hanushek (2006) goes so far as to title a concurrently produced volume on the same topic "How School Finance Lawsuits Exploit Judges' Good Intentions and Harm Our Children" [emphasis ours]. The premise that additional funding for schools often leveraged toward class size reduction, additional course offerings or increased teacher salaries, causes harm to children is, on its face, absurd. The book, which implies as much in its title, never once validates that such reforms ever cause observable harm. Rather, the title is little more than a manipulative attempt to instill fear of pending harm in the mind of the uncritical spectator. The book also includes two examples of a type of analysis that occurred with some frequency in the mid-2000s and that also had the intent of showing that school funding doesn't matter. These studies would cherry pick anecdotal information on either or both of the following: (a) poorly funded schools that have high outcomes, and (b) well-funded schools that have low outcomes (see Evers and Clopto 2006; Walberg 2006). "The results show that, all else equal, smaller classes raise third-grade mathematics and reading achievement, particularly for lower-income students. However, the expansion of the teaching force required to staff the additional classrooms appears to have led to a deterioration in average teacher quality in schools serving a predominantly Black student body. This deterioration partially or, in some cases, fully offset the benefits of smaller classes, demonstrating the importance of considering all implications of any policy change" (p. 1). For further discussion of the complexities of evaluating class size reduction in a dynamic policy context, see Sims 2008, 2009; Chingos 2010. 
For an earlier analysis that parallel school funding disparities and real resource disparities, see Corcoran et al. 2004. In the absence of clear documentation of these rather obvious connections between fiscal constraints, wages, and class sizes, a body of literature has emerged that suggests that no such linkage exists, that local public school districts of all types possess more than sufficient resources to achieve competitive, restructured compensation systems, or entirely different service delivery approaches altogether with no consequences resulting from resource reallocation. During the economic downturn, much of that non-peer-reviewed, think-tank-sponsored literature found its way to a special section on the U.S. Department of Education website dedicated to improving educational productivity. Baker and Welner (2012) provide a substantive critique of the reports posted on the website. Census poverty rate, where a 30 % rate is equivalent to about 80 % free or reduced priced lunch. Table 4A.1 Data sources, years, and measures Years available Years imputed District level fiscal measures Per pupil spending U.S. Census F-33 Public Elementary-Secondary Education Finance Survey (F-33)a State revenue Local revenue Federal revenue District characteristics National Center for Education Statistics (NCES), Common Core of Data (CCD)b Grade ranges Regional cost variation Education comparable wage index Taylor's Extended NCES Comparable Wage Index 1993–1996, 2012 Population needs/characteristics Child povertyc U.S. Census Small Area Income and Poverty Estimatesd 1995, 1997, 1999, 2000–2012 1993–1994, 1996, 1998 Teacher/nonteacher wages Individual worker IPUMS Census & American Community Survey Wages/compensation Teacher linked to school/district (sample) NCES Schools and Staffing Surveye 1993–1994, 1999–2000, 2003–2004, 2007–2008, 2011–2012 School (sample) NCES Schools and Staffing Survey aU.S. Census. Public Elementary–Secondary Education Finance Data bU.S. Department of Education, National Center for Education Statistics. Common Core of Data cSee Baker et al. (2013b) dU.S. Census. Small Area Income and Poverty Estimates, School District Data Files eU.S. Department of Education, National Center for Education Statistics. Schools and Staffing Survey Summary data by state Effort & revenue Spending & staffing Effort index (%) State & local revenue ($) Spending level ($) Staffing level Class size – departmental Class size – Self contained Wage ratio (%) Staffing Fairness Baker, Bruce D. 2012. Revisiting the age-old question: Does money matter in education? Washington, DC: Albert Shanker Institute.Google Scholar Baker, Bruce D. 2014. Evaluating the recession's impact on state school finance systems. Education Policy Analysis Archives 22(91). doi:http://dx.doi.org/10.14507/epaa.v22n91.2014 Baker, Bruce, and Preston Green. 2009a. Conceptions, measurement and application of educational adequacy standards. In AERA handbook on education policy, ed. David N. Plank. New York: Routledge.Google Scholar Baker, Bruce, and Preston Green. 2009b. Does increased state involvement in public schooling necessarily increase equality of educational opportunity? In The rising state: How state power is transforming our nation's schools, ed. Bonnie C. Fuscarelli and Bruce S. Cooper, 133. Albany: State University of New York Press.Google Scholar Baker, Bruce D., and Kevin G. Welner. 2011. School finance and courts: Does reform matter, and how can we tell. 
Teachers College Record 113(11): 2374–14.Google Scholar Baker, Bruce D., and Kevin G. Welner. 2012. Evidence and rigor scrutinizing the rhetorical embrace of evidence-based decision making. Educational Researcher 41(3): 98–101.CrossRefGoogle Scholar Baker, Bruce D., Joseph O. Oluwole, and Preston C. Green III. 2013a. The legal consequences of mandating high stakes decisions based on low quality information: Teacher evaluation in the race-to-the-top era. Education Policy Analysis Archives 21(5).Google Scholar Baker, Bruce D., Lori Taylor, Jesse Levin, Jay Chambers, and Charles Blankenship. 2013b. Adjusted poverty measures and the distribution of title I aid: Does Title I really make the rich states richer? Education Finance and Policy 8(3): 394–417.CrossRefGoogle Scholar Baker, Bruce D., David G. Sciarra, and Danielle Farrie. 2014. Is school funding fair? A national report card. Newark: Education Law Center. http://www.schoolfundingfairness.org. Borman, Geoffrey D., and Maritza Dowling. 2010. Schools and inequality: A multilevel analysis of Coleman's equality of educational opportunity data. Teachers College Record 112(5): 1201–1246.Google Scholar Card, David, and A. Abigail Payne. 2002. School finance reform, the distribution of school spending, and the distribution of student test scores. Journal of Public Economics 83(1): 49–82.CrossRefGoogle Scholar Chingos, Matthew M. 2010. The impact of a universal class-size reduction policy: Evidence from Florida's statewide mandate (Program on Education Policy and Governance Working Paper 10–03). Cambridge, MA: Harvard University.Google Scholar Clune, William H. 1994. The shift from equity to adequacy in school finance. Educational Policy 8(4): 376–394.CrossRefGoogle Scholar Corcoran, Sean, William N. Evans, Jennifer Godwin, Sheila E. Murray, and Robert M. Schwab. 2004. The changing distribution of education finance, 1972 to 1997. In Social inequality, ed. Kathryn M. Neckerman. New York: Russell Sage Foundation.Google Scholar Deke, John. 2003. A study of the impact of public school spending on postsecondary educational attainment using statewide school district refinancing in Kansas. Economics of Education Review 22(3): 275–284.CrossRefGoogle Scholar Downes, Tom A. 2004. School finance reform and school quality: Lessons from Vermont. In Helping children left behind: State aid and the pursuit of educational equity, ed. John Yinger. Cambridge, MA: MIT Press.Google Scholar Downes, Tom A., Jeff Zabel, and Dana Ansel. 2009. Incomplete grade: Massachusetts education reform at 15. Boston: MassINC. http://www.massinc.org/Research/Incomplete-Grade.aspx. Duncombe, William, and John Yinger. 2005. How much more does a disadvantaged student cost? Economics of Education Review 24(5): 513–532.CrossRefGoogle Scholar Ehrenberg, Ronald G., Dominic J. Brewer, Adam Gamoran, and J. Douglas Willms. 2001. Class size and student achievement. Psychological Science in the Public Interest 2(1): 1–30.CrossRefGoogle Scholar Evers, Williamson M., and Paul Clopto. 2006. High-spending, low-performing school districts. In Courting failure: How school finance lawsuits exploit judges' good intentions and harm our children, ed. Eric Hanushek, 103–194. Palo Alto: Hoover Institution Press.Google Scholar Ferguson, Ronald. 1991. Paying for public education: New evidence on how and why money matters. Harvard Journal on Legislation 28(2): 465–498.Google Scholar Figlio, David N. 1997. Teacher salaries and teacher quality. Economics Letters 55: 267–271.CrossRefGoogle Scholar Figlio, David N. 
2002. Can public schools buy better-qualified teachers? Industrial and Labor Relations Review 55: 686–699.CrossRefGoogle Scholar Figlio, David N. 2004. Funding and accountability: Some conceptual and technical issues in state aid reform. In Helping children left behind: State aid and the pursuit of educational equity, ed. John Yinger, 87–111. Cambridge, MA: MIT Press.Google Scholar Figlio, David N., and Kim Rueben. 2001. Tax limits and the qualifications of new teachers. Journal of Public Economics (April): 49–71.Google Scholar Greene, Jay P., and Julie R. Trivitt. 2008. Can judges improve academic achievement? Peabody Journal of Education 83(2): 224–237.CrossRefGoogle Scholar Guryan, Jonathan. 2001. Does money matter? Estimates from education finance reform in Massachusetts (NBER Working Paper 8269). Cambridge, MA: National Bureau of Economic Research. http://www.nber.org/papers/w8269. Hanushek, Eric A. 1986. Economics of schooling: Production and efficiency in public schools. Journal of Economic Literature 24(3): 1141–1177.Google Scholar Hanushek, Eric A. 1989. The impact of differential expenditures on school performance. Educational Researcher 18(4): 45–62.CrossRefGoogle Scholar Hanushek, Eric A. (ed.). 2006. Courting failure: How school finance lawsuits exploit judges' good intentions and harm our children. Palo Alto: Hoover Press.Google Scholar Hanushek, Eric A. 2011. The economic value of higher teacher quality. Economics of Education Review 30(3): 466–479.CrossRefGoogle Scholar Hanushek, Eric A. 2009. Teacher deselection. In Creating a new teaching profession, ed. Dan Goldhaber and Jane Hannaway, 168, 172–173. Washington, DC: Urban Institute Press.Google Scholar Hanushek, Eric A., and Alfred Lindseth. 2009. Schoolhouses, courthouses and statehouses. Princeton: Princeton University Press.Google Scholar Hyman, Joshua. 2013. Does money matter in the long run? Effects of school spending on educational attainment (Working Paper). Ann Arbor: University of Michigan. http://www-personal.umich.edu/~jmhyman/Hyman_JMP.pdf. Isenberg, E. P. 2010. The effect of class size on teacher attrition: Evidence from class size reduction policies in New York State. US Census Bureau Center for Economic Studies Paper CES-WP-10-05.Google Scholar Jackson, C. Kirabo, Rucker Johnson, and Claudia Persico. 2015. The effects of school spending on educational and economic outcomes: Evidence from school finance reforms (NBER Working Paper No. 20847). Cambridge, MA: National Bureau of Economic Research. http://www.nber.org/papers/w20847 Jepsen, Christopher, and Steven Rivkin. 2002. What is the tradeoff between smaller classes and teacher quality? (NBER Working Paper 9205). Cambridge, MA: National Bureau of Economic Research. http://www.nber.org/papers/w9205. Konstantopoulos, Spryros, and Geoffrey Borman. 2011. Family background and school effects on student achievement: A multilevel analysis of the Coleman data. Teachers College Record 113(1): 97–132.Google Scholar Loeb, Susanna, and Marianne E. Page. 2000. Examining the link between teacher wages and student outcomes: The importance of alternative labor market opportunities and non-pecuniary variation. Review of Economics and Statistics 82(3): 393–408.CrossRefGoogle Scholar Loeb, Susanna, Linda Darling-Hammond, and John Luczak. 2005. How teaching conditions predict teacher turnover in California schools. Peabody Journal of Education 80(3): 44–70.CrossRefGoogle Scholar Mishel, Lawrence, Sylvia A. Allegretto, and Sean P. Corcoran. 2011. 
The teaching penalty: An update through 2010 (EPI Issue Brief 298). Washington, DC: Economic Policy Institute. http://www.epi.org/publication/the_teaching_penalty_an_update_through_2010/. Murnane, Richard J., and Randall Olsen. 1989. The effects of salaries and opportunity costs on length of state in teaching. Evidence from Michigan. Review of Economics and Statistics 71(2): 347–352.CrossRefGoogle Scholar Neymotin, Florence. 2010. The relationship between school funding and student achievement in Kansas public schools. Journal of Education Finance 36(1): 88–108.CrossRefGoogle Scholar Nguyen-Hoang, Phuong, and John Yinger. 2014. Education finance reform, local behavior, and student performance in Massachusetts. Journal of Education Finance 39(4): 297–322.Google Scholar Ondrich, Jan, Emily Pas, and John Yinger. 2008. The determinants of teacher attrition in upstate New York. Public Finance Review 36(1): 112–144.CrossRefGoogle Scholar Papke, Leslie E. 2005. The effects of spending on test pass rates: Evidence from Michigan. Journal of Public Economics 89(5–6): 821–839.CrossRefGoogle Scholar Roy, Joydeep. 2011. Impact of school finance reform on resource equalization and academic performance: Evidence from Michigan. Education Finance and Policy 6(2): 137–167.CrossRefGoogle Scholar Sims, David. 2008. A strategic response to class size reduction: Combination classes and student achievement in California. Journal of Policy Analysis and Management 27(3): 457–478.CrossRefGoogle Scholar Sims, David. 2009. Crowding Peter to educate Paul: Lessons from a class size reduction externality. Economics of Education Review 28: 465–473.CrossRefGoogle Scholar Springer, Matthew G., Dale Ballou, Laura S. Hamilton, Vi-Nhuan Le, J.R. Lockwood, Daniel F. McCaffrey, Matthew Pepper, and Brian M. Stecher. 2011. Teacher pay for performance: Experimental evidence from the project on incentives in teaching (POINT). Evanston: Society for Research on Educational Effectiveness.Google Scholar U.S. Census. Public elementary–Secondary education finance data. http://www.census.gov/govs/school/ U.S. Department of Education. National Center for Education Statistics. Common Core of Data. http://nces.ed.gov/ccd/ccddata.asp U.S. Department of Education. National Center for Education Statistics. Schools and Staffing Survey. http://nces.ed.gov/surveys/sass/ U.S. Census. Small area income and poverty estimates, School District Data Files. http://www.census.gov/did/www/saipe/data/schools/data/index.html U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance. 2003. Educational practices supported by rigorous evidence: A user friendly guide. Washington, DC. http://www2.ed.gov/rschstat/research/pubs/rigorousevid/rigorousevid.pdf Walberg, Herbert J. 2006. High poverty, high performance schools, districts and states, 79–102. In Hanushek 2006.Google Scholar © Educational Testing Service 2016 Open Access This chapter is distributed under the terms of the Creative Commons Attribution-Noncommercial 2.5 License (http://creativecommons.org/licenses/by-nc/2.5/) which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. 
The images or other third party material in this chapter are included in the work's Creative Commons license, unless indicated otherwise in the credit line; if such material is not included in the work's Creative Commons license and the respective action is not permitted by statutory regulation, users will need to obtain permission from the license holder to duplicate, adapt or reproduce the material. 1.Graduate School of EducationRutgers UniversityNew BrunswickUSA 2.Education Law CenterNewarkUSA Baker B., Farrie D., Sciarra D.G. (2016) The Changing Distribution of Educational Opportunities: 1993–2012. In: Kirsch I., Braun H. (eds) The Dynamics of Opportunity in America. Springer, Cham First Online 10 March 2016 DOI https://doi.org/10.1007/978-3-319-25991-8_4 Publisher Name Springer, Cham eBook Packages Education
Tagged: determinant of a matrix Find All Eigenvalues and Corresponding Eigenvectors for the $3\times 3$ matrix Find all eigenvalues and corresponding eigenvectors for the matrix $A$ if 2 & -3 & 0 \\ Find All Values of $a$ which Will Guarantee that $A$ Has Eigenvalues 0, 3, and -3. Let $A$ be the matrix given by -2 & 0 & 1 \\ -5 & 3 & a \\ 4 & -2 & -1 \] for some variable $a$. Find all values of $a$ which will guarantee that $A$ has eigenvalues $0$, $3$, and $-3$. Given the Data of Eigenvalues, Determine if the Matrix is Invertible In each of the following cases, can we conclude that $A$ is invertible? If so, find an expression for $A^{-1}$ as a linear combination of positive powers of $A$. If $A$ is not invertible, explain why not. (a) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=0$. (b) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=-1$. Express the Eigenvalues of a 2 by 2 Matrix in Terms of the Trace and Determinant Let $A=\begin{bmatrix} a & b\\ c& d \end{bmatrix}$ be an $2\times 2$ matrix. Express the eigenvalues of $A$ in terms of the trace and the determinant of $A$. Linear Transformation $T:\R^2 \to \R^2$ Given in Figure Let $T:\R^2\to \R^2$ be a linear transformation such that it maps the vectors $\mathbf{v}_1, \mathbf{v}_2$ as indicated in the figure below. Find the matrix representation $A$ of the linear transformation $T$. Is the Sum of a Nilpotent Matrix and an Invertible Matrix Invertible? A square matrix $A$ is called nilpotent if some power of $A$ is the zero matrix. Namely, $A$ is nilpotent if there exists a positive integer $k$ such that $A^k=O$, where $O$ is the zero matrix. Suppose that $A$ is a nilpotent matrix and let $B$ be an invertible matrix of the same size as $A$. Is the matrix $B-A$ invertible? If so prove it. Otherwise, give a counterexample. Linear Algebra Midterm 1 at the Ohio State University (2/3) The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes. This post is Part 2 and contains Problem 4, 5, and 6. Check out Part 1 and Part 3 for the rest of the exam problems. Problem 4. Let \[\mathbf{a}_1=\begin{bmatrix} \end{bmatrix}, \mathbf{a}_2=\begin{bmatrix} -1 \\ \end{bmatrix}, \mathbf{b}=\begin{bmatrix} a \\ Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of vectors $\mathbf{a}_1$ and $\mathbf{a}_2$. Problem 5. Find the inverse matrix of 0 &1 & 0 & 0 \\ \end{bmatrix}\] if it exists. If you think there is no inverse matrix of $A$, then give a reason. Consider the system of linear equations 3x_1+2x_2&=1\\ 5x_1+3x_2&=2. (a) Find the coefficient matrix $A$ of the system. (b) Find the inverse matrix of the coefficient matrix $A$. (c) Using the inverse matrix of $A$, find the solution of the system. (Linear Algebra Midterm Exam 1, the Ohio State University) Let $I$ be the $2\times 2$ identity matrix. Then prove that $-I$ cannot be a commutator $[A, B]:=ABA^{-1}B^{-1}$ for any $2\times 2$ matrices $A$ and $B$ with determinant $1$. Using the Wronskian for Exponential Functions, Determine Whether the Set is Linearly Independent By calculating the Wronskian, determine whether the set of exponential functions \[\{e^x, e^{2x}, e^{3x}\}\] is linearly independent on the interval $[-1, 1]$. 
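For the Wronskian problem just listed, the determinant can also be checked symbolically. The snippet below is an illustrative verification (sympy is chosen here for convenience and is not part of the original problem): it builds the Wronskian matrix of $\{e^x, e^{2x}, e^{3x}\}$ from the functions and their first two derivatives and simplifies its determinant.

```python
# Wronskian of {e^x, e^{2x}, e^{3x}}: rows are the 0th, 1st, and 2nd derivatives.
import sympy as sp

x = sp.symbols("x")
fs = [sp.exp(x), sp.exp(2 * x), sp.exp(3 * x)]

W = sp.Matrix([[sp.diff(f, x, k) for f in fs] for k in range(3)])
print(sp.simplify(W.det()))  # 2*exp(6*x)
```

Since the determinant simplifies to $2e^{6x}$, which never vanishes, the set is linearly independent on $[-1, 1]$.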
True or False: If $A, B$ are 2 by 2 Matrices such that $(AB)^2=O$, then $(BA)^2=O$ Let $A$ and $B$ be $2\times 2$ matrices such that $(AB)^2=O$, where $O$ is the $2\times 2$ zero matrix. Determine whether $(BA)^2$ must be $O$ as well. If so, prove it. If not, give a counter example. How to Prove a Matrix is Nonsingular in 10 Seconds Using the numbers appearing in \[\pi=3.1415926535897932384626433832795028841971693993751058209749\dots\] we construct the matrix \[A=\begin{bmatrix} 3 & 14 &1592& 65358\\ 97932& 38462643& 38& 32\\ 7950& 2& 8841& 9716\\ 939937510& 5820& 974& 9 Prove that the matrix $A$ is nonsingular. Eigenvalues of a Matrix and its Transpose are the Same Let $A$ be a square matrix. Prove that the eigenvalues of the transpose $A^{\trans}$ are the same as the eigenvalues of $A$. The Formula for the Inverse Matrix of $I+A$ for a $2\times 2$ Singular Matrix $A$ Let $A$ be a singular $2\times 2$ matrix such that $\tr(A)\neq -1$ and let $I$ be the $2\times 2$ identity matrix. Then prove that the inverse matrix of the matrix $I+A$ is given by the following formula: \[(I+A)^{-1}=I-\frac{1}{1+\tr(A)}A.\] Using the formula, calculate the inverse matrix of $\begin{bmatrix} 2 & 1\\ 1& 2 The Product of Two Nonsingular Matrices is Nonsingular Prove that if $n\times n$ matrices $A$ and $B$ are nonsingular, then the product $AB$ is also a nonsingular matrix. The Determinant of a Skew-Symmetric Matrix is Zero Prove that the determinant of an $n\times n$ skew-symmetric matrix is zero if $n$ is odd. Eigenvalues of Orthogonal Matrices Have Length 1. Every $3\times 3$ Orthogonal Matrix Has 1 as an Eigenvalue (a) Let $A$ be a real orthogonal $n\times n$ matrix. Prove that the length (magnitude) of each eigenvalue of $A$ is $1$. (b) Let $A$ be a real orthogonal $3\times 3$ matrix and suppose that the determinant of $A$ is $1$. Then prove that $A$ has $1$ as an eigenvalue. Nilpotent Ideal and Surjective Module Homomorphisms Boolean Rings Do Not Have Nonzero Nilpotent Elements If a Finite Group Acts on a Set Freely and Transitively, then the Numbers of Elements are the Same Determine Eigenvalues, Eigenvectors, Diagonalizable From a Partial Information of a Matrix A Group is Abelian if and only if Squaring is a Group Homomorphism Diagonalize a 2 by 2 Matrix $A$ and Calculate the Power $A^{100}$ The Matrix for the Linear Transformation of the Reflection Across a Line in the Plane
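As an aside on the inverse-matrix formula for $I+A$ listed above: since $A$ is singular, the Cayley-Hamilton theorem gives $A^2=\operatorname{tr}(A)A$, so $(I+A)\left(I-\frac{1}{1+\operatorname{tr}(A)}A\right)=I$ whenever $\operatorname{tr}(A)\neq -1$. A quick numerical spot-check follows; the particular singular matrix is chosen here only for illustration.

```python
# Spot-check of (I + A)^{-1} = I - A/(1 + tr(A)) for a singular 2x2 matrix A
# with tr(A) != -1. The matrix below is just an example chosen for the check.
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # det(A) = 0, tr(A) = 5
I = np.eye(2)

lhs = np.linalg.inv(I + A)
rhs = I - A / (1.0 + np.trace(A))
print(np.allclose(lhs, rhs))  # True
```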
On the Denjoy rank, the Kechris-Woodin rank and the Zalcwasser rank Author: Haseo Ki MSC (1991): Primary 04A15, 26A21; Secondary 42A20 DOI: https://doi.org/10.1090/S0002-9947-97-01767-4 Abstract: We show that the Denjoy rank and the Zalcwasser rank are incomparable. We construct for any countable ordinal $\alpha$ differentiable functions $f$ and $g$ such that the Zalcwasser rank and the Kechris-Woodin rank of $f$ are $\alpha +1$ but the Denjoy rank of $f$ is 2 and the Denjoy rank and the Kechris-Woodin rank of $g$ are $\alpha +1$ but the Zalcwasser rank of $g$ is 1. We then derive a theorem that shows the surprising behavior of the Denjoy rank, the Kechris-Woodin rank and the Zalcwasser rank. M. Ajtai and A. S. Kechris, The set of continuous functions with everywhere convergent Fourier series, Trans. Amer. Math. Soc. 302 (1987), no. 1, 207–221. MR 887506, DOI https://doi.org/10.1090/S0002-9947-1987-0887506-4 Andrew M. Bruckner, Differentiation of real functions, Lecture Notes in Mathematics, vol. 659, Springer, Berlin, 1978. MR 507448 D. C. Gillespie and W. A. Hurwitz, On sequences of continuous functions having continuous limits, Trans. Amer. Math. Soc. 32 (1930), 527–543. Y. Katznelson, An introduction to harmonic analysis, 2nd ed., Dover, New York, 1976. Alexander S. Kechris, Classical descriptive set theory, Graduate Texts in Mathematics, vol. 156, Springer-Verlag, New York, 1995. MR 1321597 Haseo Ki, The Kechris-Woodin rank is finer than the Zalcwasser rank, Trans. Amer. Math. Soc. 347 (1995), no. 11, 4471–4484. MR 1321581, DOI https://doi.org/10.1090/S0002-9947-1995-1321581-2 Alexander S. Kechris and W. Hugh Woodin, Ranks of differentiable functions, Mathematika 33 (1986), no. 2, 252–278 (1987). MR 882498, DOI https://doi.org/10.1112/S0025579300011244 S. Mazurkiewicz, Über die Menge der differenzierbaren Funktionen, Fund. Math. 27 (1936), 244–249. Yiannis N. Moschovakis, Descriptive set theory, Studies in Logic and the Foundations of Mathematics, vol. 100, North-Holland Publishing Co., Amsterdam-New York, 1980. MR 561709 T. I. Ramsamujh, Three ordinal ranks for the set of differentiable functions, J. Math. Anal. Appl. 158 (1991), no. 2, 539–555. MR 1117581, DOI https://doi.org/10.1016/0022-247X%2891%2990255-X A. Zalcwasser, Sur une propriété du champs des fonctions continues, Studia Math. 2 (1930), 63–67. A. Zygmund, Trigonometric series. 2nd ed. Vols. I, II, Cambridge University Press, New York, 1959. MR 0107776 M. Ajtai and A. S. Kechris, The set of continuous functions with everywhere convergent Fourier series, Trans. Amer. Math. Soc. 302 (1987), 207–221. A. M. Bruckner, Differentiation of real functions, Lecture Notes in Math., vol. 659, Springer-Verlag, Berlin and New York, 1978. A. Kechris, Classical descriptive set theory, Springer Verlag, New York, 1995. H. Ki, The Kechris-Woodin rank is finer than the Zalcwasser rank, Trans. Amer. Math. Soc. 347 (1995), 4471–4484. A. S. Kechris and W. H. Woodin, Ranks for differentiable functions, Mathematika 33 (1986), 252–278. Y. N. Moschovakis, Descriptive set theory, North-Holland, Amsterdam, 1980. T. I. Ramsamujh, Three ordinal ranks for the set of differentiable functions, J. Math. Anal. and Appl. 158 (1991), 539–555. A. Zygmund, Trigonometric series, 2nd ed., Cambridge Univ. Press, 1959. 
Retrieve articles in Transactions of the American Mathematical Society with MSC (1991): 04A15, 26A21, 42A20 Retrieve articles in all journals with MSC (1991): 04A15, 26A21, 42A20 Haseo Ki Affiliation: Department of Mathematics, Yonsei University, Seoul, 120-749, Korea Email: [email protected] Keywords: Denjoy rank, descriptive set theory, Fourier series, Kechris-Woodin rank, Zalcwasser rank Received by editor(s): April 13, 1995 Received by editor(s) in revised form: January 18, 1996 Additional Notes: Partially supported by GARC-KOSEF
A phase field $\alpha$-Navier-Stokes vesicle-fluid interaction model: Existence and uniqueness of solutions Chaos control in a pendulum system with excitations March 2015, 20(2): 385-395. doi: 10.3934/dcdsb.2015.20.385 Asymptotic behavior for a reaction-diffusion population model with delay Keng Deng 1, and Yixiang Wu 2, Department of Mathematics, University of Louisiana at Lafayette, Lafayette, Louisiana 70504-1010 Department of Mathematics, University of Louisiana at Lafayette, Lafayette, LA 70504, United States Received May 2014 Revised August 2014 Published January 2015 In this paper, we study a reaction-diffusion population model with time delay. We establish a comparison principle for coupled upper/lower solutions and prove the existence/uniqueness result for the model. We then show the global asymptotic behavior of the model. Keywords: existence-uniqueness, Population model with delay, comparison principle, asymptotic behavior.. Mathematics Subject Classification: 35A01, 35B40, 35K57, 92D2. Citation: Keng Deng, Yixiang Wu. Asymptotic behavior for a reaction-diffusion population model with delay. Discrete & Continuous Dynamical Systems - B, 2015, 20 (2) : 385-395. doi: 10.3934/dcdsb.2015.20.385 S. Ai, Traveling wave fronts for generalized Fisher equations with spatio-temporal delays,, J. Differential Equations, 232 (2007), 104. doi: 10.1016/j.jde.2006.08.015. Google Scholar M. V. Bartuccelli and S. A. Gourley, A Population model with time-dependent delay,, Math. Comput. Modelling, 26 (1997), 13. doi: 10.1016/S0895-7177(97)00237-9. Google Scholar N. F. Britton, Spatial structures and periodic travelling waves in an integro-differential reaction-diffusion population model,, SIAM J. Appl. Math., 50 (1990), 1663. doi: 10.1137/0150099. Google Scholar K. Deng, On a nonlocal reaction-diffusion population model,, Discrete Contin. Dyn. Syst. Ser. B, 9 (2008), 65. doi: 10.3934/dcdsb.2008.9.65. Google Scholar S. A. Gourley and N. F. Briton, On a modified Volterra population equation with diffusion,, Nonlinear Anal., 21 (1993), 389. doi: 10.1016/0362-546X(93)90082-4. Google Scholar Y. Kyrychko, S. A. Gourley and M. V. Bartuccelli, Comparison and convergence to equlibrium in a nonlocal delayed reaction-diffusion model on an infinite domain,, Discrete Contin. Dyn. Syst. Ser. B, 5 (2005), 1015. doi: 10.3934/dcdsb.2005.5.1015. Google Scholar R. Laister, Global asymptotic behavior in some functional parabolic equations,, Nonlinear Anal., 50 (2002), 347. doi: 10.1016/S0362-546X(01)00766-0. Google Scholar C. Ou and J. Wu, Traveling wavefronts in a delayed food-limited population model,, SIAM J. Math. Anal., 39 (2007), 103. doi: 10.1137/050638011. Google Scholar M. Protter and H. Weinberger, Maximum Priciples in Differential Equations,, Prentice-Hall Inc, (1967). Google Scholar R. Redlinger, Existence theorems for semilinear parabolic systems with functionals,, Nonlinear Anal., 8 (1984), 667. doi: 10.1016/0362-546X(84)90011-7. Google Scholar R. Redlinger, On Volterra's population equation with diffusion,, SIAM J. Math. Anal., 16 (1985), 135. doi: 10.1137/0516008. Google Scholar S. Ruan and D. Xiao, Stability of steady states and existence of travelling waves in a vector-disease model,, Proc. Royal Soc. Edinburgh. Ser. A, 134 (2004), 991. doi: 10.1017/S0308210500003590. Google Scholar A. Schiaffino, On a diffusion Volterra equation,, Nonlinear Anal., 3 (1979), 595. doi: 10.1016/0362-546X(79)90088-9. Google Scholar Z-C. Wang, W-T. Li and S. 
Weaponry made from extreme light-weight steel: swords and daggers

Imagine there was a material that was mostly identical to steel in almost every way, but instead of a density of $7.85–7.87\ g/cm^3$ it would be a tenth of that. That is roughly 3.5 times lighter than aluminium. The idea is that all other properties, such as hardness, elasticity, brittleness, and potential sharpness, are identical to those of the steel that was used for medieval weaponry anyway. Chemical properties, as well as the forging process, are to be ignored as it is a fictional material. Would using this material to craft swords and daggers be advantageous compared to creating them with traditional materials? I am fairly certain that longswords would be pointless, as they partially rely on their mass to have an impact, but short swords, or even daggers, might be extremely useful as they would be very easy to use and hide. I am guessing the force of the hand/arm that drives the dagger into someone would suffice to tear the flesh. Answers should elaborate on whether adaptations or changes to traditional weapon designs would be required to make the weapon profit from the unique material properties and offer advantages over the traditional weapons. This question is part of a series regarding weapon and armour design using fictional materials with unique properties. This is the first question of the series, so there are no links to others yet. science-based weapons medieval materials ArtificialSoul

$\begingroup$ Such metal will be great for armour: plates and chain mail and with so well armoured soldiers daggers will not be effective $\endgroup$ – jean Jul 4 '18 at 20:34 $\begingroup$ @jean : In fact, at three-quarters the density of water, such armor would be a flotation device... $\endgroup$ – Eric Towers Jul 4 '18 at 21:01 $\begingroup$ hollow out, fill with lead. $\endgroup$ – Ewan Jul 4 '18 at 23:02 $\begingroup$ Aside from the effectiveness of weapons made from this material - I think early manpowered flight would happen with this material, eg gliders etc - leading to a different type of combat. $\endgroup$ – SeanR Jul 5 '18 at 9:05 $\begingroup$ @ArtificialSoul Any longsword is a bad choice against someone in armour, swords were not anti-armour weapons. When forced to be used as such, they were halfsworded and the user tried to manoeuvre the point into a gap. (Which would of course be easier with a lighter sword.) $\endgroup$ – Grimm The Opiner Jul 6 '18 at 12:33

I wouldn't be so quick to dismiss its usefulness for long swords. In weaponized combat, the use-case and fighting style will vary with each weapon's size, shape, maneuverability, weight etc. With the materials of choice for weapons being steel over the last couple thousand years, we developed combat styles for short-and-maneuverable weapons as well as big-and-heavy weapons, but don't have many big-and-light weapons because materials sturdy enough to make weapons simply weigh too much. In terms of fighting styles that focus on speed, such as knife fighting (a common example of a weapon for this is a dagger), the fighter has a sturdy and maneuverable weapon but must sacrifice reach. Longer swords, on the other hand, sacrifice speed in order to gain reach and the ability to deliver stronger blows (both because of their weight and their length), granting them the ability to slash through a target.
Some sword designs have tried to combine the two, the main one which comes to mind being the rapier, which is mainly intended as a long-yet-agile thrust weapon but somewhat retains the ability to slash. The existence of rapiers is proof enough for me that there is a good use-case for long, agile, thrusting swords; for such swords a metal such as described would certainly be very useful. The weight (and therefore agility) would be similar to that of the modern-day foil used for competitive fencing. Modern-day foils are very agile but lack the strength to be useful weapons (granted, that's by design, but you would not be able to make something strong enough at the same weight). A weapon as agile as the foil and as solid as the rapier would certainly be of interest. As for long swords (such as the claymore) we've never seen a lighter equivalent, but that speaks more to the lack of viable materials for such a weapon than a lack of usefulness for it. Some people would certainly experiment with it, and I'm sure that we would see new combat styles develop for it. Just because they wouldn't work for current combat styles doesn't mean that no combat style could be developed for them. It could fill the gap between swords and pole arms. Personally I think that small knives (blade <10 inches) would benefit the least from this new material since they are already quite maneuverable, limited more by their shape and size than by their weight. Short swords and rapiers, which are already "hybrids" between quick and long, would improve by large margins. Lastly, I'll discuss the differences in construction that would be required for knives. The balance of a good knife is somewhere around the guard, the point at which the handle stops and the blade starts. If you were to drastically lighten the blade material (to 0.78 g/cm^3, a tenth that of steel, nearly identical to that of oak) and keep the handle materials the same, you would end up with a knife that is very butt-heavy. You would need to thin out the handle or use lighter materials on it, or both. Alexandre Aubrey

$\begingroup$ Wish I could upvote this again. Most people assume medieval sword combat was just hacking at each other's armour until one died. HEMA and Talhoffer style fighting tell a contrasting tale! $\endgroup$ – Korthalion Jul 5 '18 at 8:26 $\begingroup$ @Korthalion yes, this answer needs more upvotes. A nice and elaborated answer. $\endgroup$ – ArtificialSoul Jul 5 '18 at 8:47 $\begingroup$ Note that while a claymore might be a long sword, it is not a longsword. $\endgroup$ – Grimm The Opiner Jul 6 '18 at 12:28 $\begingroup$ @Alexandre Aubrey I am not criticizing your answer which is obviously fantastic. I was just pointing out one fact which is often misunderstood because of popular media. For example movies like to give rapiers to women because they are "lighter", but we know in reality that they are not so this is a popular misconception. I was just highlighting this fact for what it is worth. Possibly not a lot.. $\endgroup$ – Tyler S. Loeper Jul 6 '18 at 15:58 $\begingroup$ Butt-heavy blade weapon, weightless blade... sounds like a lightsaber - er, a light saber? $\endgroup$ – Mathieu Guindon Jul 6 '18 at 17:50

Such a metal would be good for thrusting weapons like daggers, shivs, and rapiers; mediocre for slashing weapons like cutlasses, scimitars, katana, etc.
that rely more on their weight but still use speed and a cutting edge for most of the damage; but poor for a throwing knife (poor balance, affected by wind) or any edged blunt-force weapon (like a long sword, most swords fall into this category). It would be worthless for maces, flails, or any other non-edged but metal blunt-force weapons. On the other hand, such a metal would be wonderful for bayonets, arrow tips, spear tips, or any other application where the blade is connected to something else that represents the bulk of the mass. It would make balancing such an object much simpler and would overcome the problem of breaking under the leverage applied by the item the blade was mounted to. Ultimately, the metal would be fabulous for door bracing, carriages, or any other construction where metal is used to support the construction but the weight is an issue. The guys using wheelbarrows and building skyscrapers will erect a statue in your honor. JBHJBH $\begingroup$ Comments are not for extended discussion; this conversation about objections to this answer has been moved to chat. $\endgroup$ – Monica Cellio Jul 6 '18 at 1:28 $\begingroup$ Please note, that swords were not "blunt-force weapons". Contrary to popular myths, swords were very nimble weapons, their center of mass is very close to the handle, so they would be very bad at bashing even unarmored opponents, let alone armored ones. Swords are for cutting and stabbing, not for chopping like with an axe. Here is a useful video demonstrating the difference between a sword and a mace: youtu.be/fg-GhnAO1VE?t=423. No sword could cut through metal armor, especially full plate, so there was no point in making swords heavy and dull to bash armor, it would be useless. $\endgroup$ – vsz Jul 6 '18 at 19:09 As others have said, your ultralight steel will be best for thrusting weapons, but I think there are more uses people are forgetting. You could have polearms of extraordinary strength. Normal polearms have hafts made of wood, but yours could have hafts at least partially made of this new steel. This would let you swing longer warhammers or halberds than usual. If the new steel can be combined with traditional steel, it would be useful for swords as well. You could have a sword with traditional steel in/near the hilt and new, light steel towards the tip. This would let you have a much longer sword without sacrificing its balance. One big disadvantage however, is that this new steel will have less inertia. When you parry with a traditional steel sword, a lot of the energy of your foe's swing will be wasted pushing your sword. With lighter steel, your sword will waste less of your foe's energy, and you will feel the impact a lot more in your hands, wrists, and arms. You will thus be at more risk of having your sword knocked out of your hands. You have similar problems with armor made of light steel. Being struck by a blunt weapon while wearing armor made of light steel will be more dangerous than in armor made of traditional steel because the light steel has less inertia. It absorbs less of the force, and so more of it goes into hurting you. Ryan_LRyan_L $\begingroup$ I like the combination idea of regular steel and light-weight steel. It wouldn't make a significant difference in regular wielding, but increasing reach is always a strategic advantage. 
$\endgroup$ – ArtificialSoul Jul 4 '18 at 17:03 $\begingroup$ Can confirm, having wielded a full-weight steel bill on the battlefield multiple times, having a lighter head would make you at least twice as deadly! $\endgroup$ – Korthalion Jul 5 '18 at 8:21

Most of your weaponry would be near useless. Such light weight steel would enable MUCH thicker armor without the massive (heh!) weight penalty. So now your weapons - which will depend on speed and sharpness for damage vs being a slightly sharp club - will have an even harder time doing damage through a double layer of chain mail, or plate, etc. I would predict that many actual weapons will still be made from traditional steel - they need the weight. Armor, wagon wheels, ships, etc could be made from the new light weight stuff.... ivanivan

$\begingroup$ Your weapons will also be much much longer, and so their tips will be moving much faster, meaning they will hit much harder. I suspect this will cancel out the added armor thickness, leaving the balance between armor and weaponry unchanged. $\endgroup$ – Ryan_L Jul 5 '18 at 4:15 $\begingroup$ Good point about the armor but as far as I'm aware it's always been pretty hard to stab straight through plate mail. Blunt weapons and weapons like daggers (to hit small vulnerable spots) were mostly used. $\endgroup$ – Michael Jul 5 '18 at 5:20

Steel has a density of about $8,000\ kg/m^3$, which means ubersteel has a density of about $800\ kg/m^3$, which makes it heavier than wood unless somebody is making magically dense woods (which might make a good competitor to metal). (Thank you Alexandre Aubrey for pointing me to the wood-density table; thanks to Loong for pointing out a math error.) This could be used to strengthen a traditional quarterstaff. The advantage of ubersteel over wood is the strength it would bring. The disadvantage is that it would probably be much more expensive to have one made in metal than having one made in wood with only some iron added to the ends, which makes it stronger where it hits things and also adds mass where it needs it the most. I think the increase in strength and durability is more important than the mass, but I've never had the opportunity to use an iron-shod staff. A lot of uses for it depend on the cost. If it's expensive, it will not be used in melee weapons at all. However, think about its uses in siege weaponry. Even if it were to only partially replace or reinforce the wooden parts, a catapult (and similar devices such as the trebuchet) could be made easier to move, which means faster aiming; stronger, which means more distance or higher payloads; and, if done right, faster to construct. It would also be used in buildings and vehicles, even if those vehicles are just carts. Shipwrights would love this stuff! To be able to have steel-like strength without the problems. I don't know if you are considering cannons, but being made of this material would really please your admirals and even cavalry. Lightweight artillery is very important in ships and when you have to haul your cannon around with horses. Being lighter weight, they would be more easily taken through muddy fields and other problematic terrain. Remember that most weapons spend more time being transported than being fired. I'm not sure much would be left for the common man. To be honest, the best uses in weapons are stabbing weapons like daggers, arrowheads (and yes, arrows can fly perfectly well with a light head), and spearheads.
In medieval times, there was very little metal that we would call steel. The technology of the times used bloomery furnaces, which required a lot of work to produce steel good enough for a sword. This is one of the reasons that most people didn't use swords, despite all the books and movies to the contrary. NomadMaker

$\begingroup$ Why do you think that a light quarterstaff is better than a heavy? $\endgroup$ – pipe Jul 5 '18 at 8:09 $\begingroup$ @pipe i think the idea is to have a steel quarterstaff instead of a wooden one. you could combine regular steel and light-weight steel to achieve the same weight and balance, but it would be significantly sturdier $\endgroup$ – ArtificialSoul Jul 5 '18 at 8:33 $\begingroup$ Yes. I use an aluminum quarterstaff to practice by myself. It's heavier than any wooden staff I have. As for weight, a quarterstaff is mostly a weapon that works by speed, with some damage depending on mass. $\endgroup$ – NomadMaker Jul 5 '18 at 14:48 $\begingroup$ If nothing else, you could cap both ends of the staff with the ubersteel, making them do more damage but without too much of a weight increase $\endgroup$ – Korthalion Jul 6 '18 at 10:52 $\begingroup$ Good answer. As for the first sentence (of this new material being lighter than wood): density of 0.78g/cm^3 is about the same as some of the denser woods, but denser than most. That doesn't invalidate anything you said, it makes some of them even more relevant if you compare it to the woods used for the construction of some of the items you mention. I'd recommend looking at this list to compare densities of different woods (and maybe link it in your answer to give more credibility to the claim) $\endgroup$ – Alexandre Aubrey Jul 6 '18 at 19:55

I'd say that you're probably right that daggers and shortswords would benefit from this material, whereas axes, warhammers, maces and arrows would probably be better with heavier steel as they rely at least in part on the mass behind them. Rapiers and other thrusting swords (like late medieval swords) would probably benefit, whereas hacking swords like Scandinavian/Viking swords and falcatas would probably be better in steel. Draw-cut swords might be better in a lighter material too as they rely less on weight to strike. Spears would also benefit (if not thrown) because you could get better balance without the heavy spearhead. Or rather, you wouldn't need as heavy a counterweight on the back of the spear to balance the head, allowing you to move it around faster (lower polar moment of inertia). Longswords would be an interesting case. Yes they rely somewhat on mass, but with a much lighter material you could probably make a verylongsword quite wieldy which is able to outreach an opponent, with techniques using more thrusts and draw-cuts. Not sure how it would affect parries and winding though. Poleaxes would be interesting too, or any weapon that relies on leverage to improve its strike (like falxes). I'm not sure how that would play out. Yes you have less mass to accelerate, but as anyone who's seen a quarterstaff in action knows it's perfectly possible to get a nasty amount of force behind a strike from the haft alone.

$\begingroup$ The spears, longswords and poleaxes you proposed are pretty interesting. I didn't think about using the light-weight material to increase the weapon size to still have roughly the same moment of inertia properties and thus increasing reach by a long shot.
$\endgroup$ – ArtificialSoul Jul 4 '18 at 16:58 $\begingroup$ Wikipedia says a Claymore weighs 2.2–2.8 kg (4.9–6.2 lb) with a total length of 120–140 cm (47–55 in). I suspect a 1/4 weight version would be used more like a rapier simply because that's where the weight and length end up $\endgroup$ – JollyJoker Jul 5 '18 at 7:15 $\begingroup$ @JollyJoker Agreed. There's another weapon I was thinking it might end up like, which is sort of like a longsword, but not bladed and triangular in cross section with a very fine point. Damned if I can remember the name of it though. Knights used it as a cross between a longsword and a lance. That'd work well too and probably be close to what we're looking at. $\endgroup$ – Ynneadwraith Jul 5 '18 at 8:20 $\begingroup$ @JollyJoker It's an estoc that I was after! $\endgroup$ – Ynneadwraith Jul 5 '18 at 8:46 $\begingroup$ @Ynneadwraith Assuming cost is not a problem, thrusting weapons like the estoc made completely from metal would presumably be superior to spears and such with wooden shafts. $\endgroup$ – JollyJoker Jul 5 '18 at 10:20

You're right that it's actually a disadvantage to have slashing swords or crushing weapons like maces be lightweight; they do less damage because they have less energy on impact. However, for stabbing weapons like stiletto or misericorde daggers, or even for long swords like the rapier, a lightweight but physically tough and/or springy material is a serious advantage, since the weapon can be moved faster and controlled more easily. I would expect that there would be problems with this material for weapons not because of the weapons but because armour is now 10 times as effective as it was at a given weight; weight has always been the limiting factor for metal armours. Ash

I think one thing people are not thinking about with regard to cutting weapons is that the sword could just be made thicker. Also, using F = ma and p = mv, we can see that as long as the user is swinging his sword faster he can make up for a lot of the weight difference. If this super steel is 1/10 as dense, then a given volume presumably contains 1/10 the material, so the same amount of ore would yield 10 times as much of this new steel as regular steel. This means we would probably see much wider, more extremely tapered swords to add mass. However, swords would almost definitely not be used in this universe, because the moment armour was invented a suit would require 1/10 the amount of ore, and so a mine that could previously supply 10 suits would now supply 100 suits. A huge drawback of mail and other armours was the difficulty of making them; however, more of the population could work in professions that used steel, as countries would have far fewer men working in the steel mines. As someone else mentioned, 100% steel spears would be a thing. P.Lord

In pure attack, I'd argue that nearly all swords rely more on sharpness and speed for damage than mass. That is why the point-of-balance is close to the hand compared to impact weapons like axes and maces. (Not that we should over-emphasize the importance of point-of-balance, but I believe it is relevant here.) This is also why, against heavy armor, the "murder stroke" was sometimes employed, where the wielder grabbed the sword by the blade and used it more like a warhammer than like a sword. But they rely on mass in the bind. This is not only when blocking or deflecting enemy strikes but also when controlling your opponent's weapon during an attack. This is why rapiers generally weren't much lighter than longswords.
Opponents with otherwise identical weapons but one being conventional steel and the other being your lighter density steel, I believe the conventional steel wielder would have a significant advantage in controlling their opponent's sword. The wielder of the low density sword would need to exploit quicker disengages, but I'm not sure doing so would make up for the disadvantage. I am less sure on how this low-density steel could be exploited in weapon design. Robert FisherRobert Fisher I'm going to disagree with the idea that this would be a bad material for a sword, with the prevalent idea that it reduces the force of the swing. The force of a slash is determined by more than just the weight of the weapon, it's also determined by the speed. In fact, it's determined more by speed than by weight. Newton's second law of motion states that F = ma, also written as $F = \frac{1}{2}mv².$ https://en.wikipedia.org/wiki/Force#Second_law https://en.wikipedia.org/wiki/Kinetic_energy Since we are not changing gravitational constants, anything we do to the weight will have the same factor of change to it's mass. If we reduce the mass of a sword by a factor of ten, and keep speed or acceleration the same, then yes, the force is also reduced by a factor of ten. However, a human using a much lighter blade is going to be able to move that sword much faster than the heavier one. If we make an oversimplified assumption that if we reduce the weight by 10x, then we can increase the speed it is moving by 10x. Being generic, we now have the formula of $F = (1/2)(m/10)(10*v)².$ Doing simple algebra, we can reduce this: $F = (1/2)(m/10)100v² $ $F = (1/2)m(10)v² $ $F = 5mv² $ So, this says we've increased the force of the blade by 10x by reducing the weight 10x. If we only triple the speed at which we swing the sword, we can see that we have almost the same force of impact: $F = (1/2)(m/10)(3*v)² $ $F = (1/2)(m/10)9v² $ $F = (1/2)(9/10)mv² $ $F = (9/(2*10))mv² $ $F = (9/20)mv² $ $F = 0.45mv² $ Now consider that if a soldier trains with a regular steel sword, then uses the lighter version in battle, they will have just as much muscle mass as the soldiers with the heavier sword. The lighter sword will be so much easier to swing that they will get a much faster slash than with a heavier sword. This adds the ability to change the angle of attack faster as well as simply attacking faster, and attacking for longer stretches of time. A lighter sword to carry over their forced marches would make them less exhausted or able to travel farther. I carried an M16A2 during marches in Army Basic Training, and at 8.8 lbs, it gets really heavy quite quickly. JollyJoker, in a comment on another answer, says that a claymore weighs around 5.5 lbs. A katana weighs around 2.5lbs, even made from steel. I think you would find fighting styles closer to a katana with the 10x lighter swords. You might also find fighting styles closer to kung-fu, which also uses lighter swords. https://en.wikipedia.org/wiki/Katana Another answer suggests fighting styles along the lines of an epee/foil/rapier, but if the enemy is using armor, that's not likely to work. You would have to find the joints of the armor to attack, and that's pretty hard when the armor is constantly moving. Unless the soldier is an expert fencer, they would basically have to be lucky or wear their opponent down in order to dispatch them. A TV show I saw a while ago (BBC or a Nat. Geo.), showed that a katana is actually better at attacking armor than a traditional English sword. 
They did actual tests with actual armor and swords. The long sword put a decent dent in the armor, but the katana cut into it. Unfortunately, the person doing the tests wasn't really trained in either style of sword, and I don't think the swords they used were of good quality, but you get the idea. (A trained samurai would aim to use a spot about 1/3rd to 1/4 of the length of the blade from the tip, not the center of the blade, like in the video.) https://www.youtube.com/watch?v=EDkoj932YFo ArtificialSoul computercarguy

$\begingroup$ This answer appears to assume a human arm can actually swing anything 3-10x as fast as a traditional sword, which may not actually be the case. I'd like to see some source that shows a human arm can actually move that fast (because I'm pretty sure a person's arm can go maybe twice as fast tops with no load compared to swinging a standard sword). That's where the idea that you lose power with a lighter blade comes from - you can mathematically regain that force by increasing speed to compensate... but you can't physically gain that speed. $\endgroup$ – Delioth Jul 5 '18 at 21:42 $\begingroup$ @Delioth i agree. There are certainly martial arts that rely on extremely fast punches (e.g. Wing Tsun Kung Fu) but those punches are not comparable to the moves you make while swinging a sword. And they are not even 10 times as fast as a regular swing. $\endgroup$ – ArtificialSoul Jul 6 '18 at 6:21 $\begingroup$ "$F=ma$" here $F$ is a force. But then you write "$F = \frac{1}{2}mv²$", where the right side is kinetic energy, so why are you reusing $F$ which is normally used to denote forces and not $E$ which is normally used for energy? This inconsistent use of variables makes your post pretty confusing. $\endgroup$ – CodesInChaos Jul 6 '18 at 9:16 $\begingroup$ Doing a little bit of research, some sources give me the record swinging a baseball bat (which is just under 2 pounds) at around 80 mph. Fastest sword swings clock at around 43 mph, though that may be at a different point on the lever compared to a bat. Either way, the fastest punch (and thus the fastest your arm can physically move) ever recorded was at 45 miles per hour... so we aren't getting anything close to triple the speed, even if our sword is perfectly weightless. @ArtificialSoul these blades are going to be less effective than a normal steel sword (at least for striking power) $\endgroup$ – Delioth Jul 6 '18 at 14:56 $\begingroup$ Well... a punch is a straight line. The bat swing was measured at roughly the tip of the bat as far as I could tell. It's to get some context - if you can swing a 2 pound baseball bat at 80 mph, that's really close to the fastest you can swing any object regardless of weight. In any case, outside of the largest blades you aren't going to be able to move your arm fast enough to get extra energy into the strike - it'll be going a little faster but the reduced mass means it won't have nearly the same amount of energy. $\endgroup$ – Delioth Jul 6 '18 at 15:51
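As a side note on the energy argument traded in the answer and comments above, here is a minimal sketch (Python) that compares strike energies. The masses and swing speeds are illustrative assumptions, roughly in line with the figures quoted in the comments, not measured data:

```python
# Rough comparison of strike energy for a conventional steel sword versus a
# one-tenth-density version, under assumed (not measured) swing speeds.
# Kinetic energy: E = 0.5 * m * v^2

def strike_energy(mass_kg, speed_m_s):
    return 0.5 * mass_kg * speed_m_s**2

# Illustrative assumptions: a ~1.5 kg longsword swung at ~20 m/s (~45 mph),
# and a 0.15 kg ultralight blade swung somewhat faster, but nowhere near
# 10x faster, since arm speed is the limiting factor.
conventional = strike_energy(1.5, 20.0)
ultralight   = strike_energy(0.15, 24.0)   # ~20% faster swing assumed

print(f"conventional steel: {conventional:.0f} J")
print(f"ultralight steel:   {ultralight:.0f} J")
# With these assumptions the light blade delivers far less energy per blow,
# which is why thrusting (rather than cutting) is where it should shine.
```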
Everybody seems to be missing the biggest weapon -- commerce! (And logistics). This material is going to be an incredibly useful building material for both ground and water based travel (and, when the time comes, air). The size of ships, boats and wheels this will make possible... a ship made out of wood has a maximum length of about 300 feet; one made out of steel, thousands of feet. For internal roads, imagine the loads you could haul with 10 foot diameter wheels! Armies are going to be larger and faster than in our world, able to cover more and more varied ground. As for its use as a pointed weapon, it's going to be so effective as armor that making swords out of it would be a waste of time. jmoreno

$\begingroup$ Your answer is indeed correct, but it does not answer the question. The question specifically states application in swords and daggers (mainly to avoid being too broad) and your answer does not even mention that. It is not a bad answer, so i won't downvote it, but please make remarks like this in the comments to the question in the future. (Some people already did, by the way.) $\endgroup$ – ArtificialSoul Jul 8 '18 at 13:05

For what it's worth this is not necessarily a fictional material. I obtained a section of a helicopter blade once (my sister was a PA for an exec at Hughes). The interior of the airfoil was filled with foamed aluminum. No idea how they made it. A multiphase material might well be as strong as steel and a fraction of the weight. Look at graphite composite bike frames. Far stiffer than steel, but at 1/10 the weight. I can buy a kevlar canoe that is 1/4 the weight of a traditional oak rib, cedar plank, painted canvas canoe and it will take a lot more punishment. (But not as pretty...) I saw a 3D-printed copper cube. Overall structure was tetrahedral internally. Despite being made of pure copper, it was a heat insulator better than most common insulators -- copper only took up 3% of the volume, and the nature of the connections was such that the path length was several cm per cm thickness. It was still strong enough to stand on. Consider a sword made with a foamed metal core for stiffness, and a tungsten carbide skin for carving the other guy's armour. Or a sapphire lattice sword, with the lattice arranged so that it would have a certain spring. Put a directional anti-reflection coating on it so that it is visible from the hilt end, but hard to see from the pointed end. Sherwood Botsford

$\begingroup$ While the end result is indeed similar and the applications correct, this is not what the question is about. It is about how to modify swords using such a material - and it also states it is a medieval scenario. I am guessing real foamed metal is not something a blacksmith could craft - hence the need for a fictional material. $\endgroup$ – ArtificialSoul Jul 12 '18 at 10:20 $\begingroup$ Fictional material, or fictional process take your pick. $\endgroup$ – Sherwood Botsford Jul 13 '18 at 19:05 $\begingroup$ good point, but i think i'd prefer the fictional material. $\endgroup$ – ArtificialSoul Jul 13 '18 at 19:13
Troesch, B. Andreas. Sloshing frequencies in a half-space by Kelvin inversion. Pacific J. Math. 47 (1973), no. 2, 539–552. Primary: 76.49. https://projecteuclid.org/euclid.pjm/1102945888
Recent advances in unveiling active sites in molybdenum sulfide-based electrocatalysts for the hydrogen evolution reaction Bora Seo1 & Sang Hoon Joo ORCID: orcid.org/0000-0002-8941-96621,2 Hydrogen has received significant attention as a promising future energy carrier due to its high energy density and environmentally friendly nature. In particular, the electrocatalytic generation of hydrogen fuel is highly desirable to replace current fossil fuel-dependent hydrogen production methods. However, to achieve widespread implementation of electrocatalytic hydrogen production technology, the development of highly active and durable electrocatalysts based on Earth-abundant elements is of prime importance. In this context, nanostructured molybdenum sulfides (MoS x ) have received a great deal of attention as promising alternatives to precious metal-based catalysts. In this focus review, we summarize recent efforts towards identification of the active sites in MoS x -based electrocatalysts for the hydrogen evolution reaction (HER). We also discuss recent synthetic strategies for the engineering of catalyst structures to achieve high active site densities. Finally, we suggest ongoing and future research challenges in the design of advanced MoS x -based HER electrocatalysts. Hydrogen is a sustainable and renewable energy carrier that has demonstrated potential as an alternative to fossil fuel energy sources [1, 2]. Currently, hydrogen is produced mainly by steam methane reforming and coal gasification, leading to the ensuing problem of CO2 release [3, 4]. To provide a more environmentally friendly route to hydrogen power, the development of clean hydrogen production technology is required. Electrocatalytic water splitting represents the most promising solution for producing hydrogen using a carbon-free system [5, 6]. However, the dominant use of Pt-based catalysts for the hydrogen evolution reaction (HER) hinders the widespread implementation of electrocatalytic hydrogen production systems, due to their high costs and limited abundance. Hence, there have been significant efforts to replace Pt-based catalysts with highly active, durable, and non-precious electrocatalysts for the HER [7,8,9,10,11,12,13,14,15,16,17,18,19]. Notable examples of non-precious metal catalysts include metal sulfides, metal phosphides, metal carbides, and heteroatom-doped carbons. Among the various classes of non-precious metal-based electrocatalysts [7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82], nanostructured molybdenum sulfides (MoS x , x = 2–3) have been most widely studied, owing to their high activities, excellent stabilities, and precious metal-free compositions [7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68]. Although bulk MoS2 exhibits negligible catalytic activity for the HER [83], pioneering theoretical and experimental works by Nørskov and Chorkendorff have demonstrated that nanostructured MoS x catalysts are able to catalyze the HER with high efficiency [20, 21]. During the last decade, significant progress has been made in designing MoS x catalysts at the nanoscale, which has resulted in advanced MoS x -based catalysts with enhanced HER performances. 
In this review, we highlight the key findings reported to date regarding identification of the active sites of MoSx catalysts, and synthetic strategies for engineering their structures to yield high active site densities for the HER. Over the past decade, there have been a number of important developments relating to the active sites present in MoSx catalysts. For example, the edge of MoS2 was first proposed as a catalytic active site by theoretical calculations in 2005 [20], which was later experimentally demonstrated with a model catalyst composed of MoS2 nanoparticles grown on a Au(111) surface [21]. Since then, various studies focused on maximizing active edge site densities via structural engineering of MoSx catalysts, including space-confined growth [22,23,24,25], vertical alignment [26,27,28], nano-assembly [29,30,31], and the design of biomimetic molecular catalysts [32,33,34]. The basal planes of MoS2, which were believed to be inert in the HER, have also been successfully activated to show meaningful activity by several strategies, including phase engineering from the 2H phase to the metallic 1T phase [26, 35,36,37,38], heteroatom doping [39,40,41], defect site generation [42,43,44,45,46,47], and strain engineering [40, 48, 49]. Despite significant investigations into the structural engineering of MoS2-based electrocatalysts to enhance the HER performance, a number of questions remain regarding the active sites and reaction mechanisms. For example, in the case of amorphous MoSx, identification of its active sulfur sites for hydrogen adsorption has not yet been clarified due to its structural complexity [50,51,52,53,54]. In this review, we first discuss the basic concepts for the electrocatalytic production of hydrogen, in addition to the activity parameters commonly employed for evaluation of the HER activity. We highlight a number of important results regarding the active sites of MoSx-based HER catalysts, and summarize representative synthetic strategies for engineering their structures to enhance the number of active sites in different 2H-MoS2, 1T-MoS2, and amorphous MoSx structures. We conclude the review by highlighting the current challenges and future research directions in relation to MoSx-based HER catalysts.

Hydrogen evolution reaction

Hydrogen evolution from water

The electrocatalytic production of hydrogen via water splitting is composed of two half reactions:

$$\text{Anode: } 2\mathrm{H_2O} \leftrightarrow \mathrm{O_2} + 4\mathrm{H^+} + 4\mathrm{e^-} \quad \text{(oxygen evolution reaction, OER)}$$
$$E_\mathrm{a} = 1.23\ \mathrm{V} - 0.059 \cdot \mathrm{pH} \quad \text{(V vs. normal hydrogen electrode, NHE)}$$

$$\text{Cathode: } 4\mathrm{H^+} + 4\mathrm{e^-} \leftrightarrow 2\mathrm{H_2} \quad \text{(hydrogen evolution reaction, HER)}$$
$$E_\mathrm{c} = 0\ \mathrm{V} - 0.059 \cdot \mathrm{pH} \quad \text{(V vs. NHE)}$$

In this review, we will focus on the HER taking place at the cathode. Thermodynamically, the HER occurs with 0 V (vs. reversible hydrogen electrode, RHE) of applied potential, as shown in Fig. 1.
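As a side note, the pH-dependent equilibrium potentials given above can be evaluated directly. The following minimal sketch (Python, illustrative only) computes the anode and cathode equilibrium potentials versus NHE and converts them to the RHE scale, on which the HER equilibrium sits at 0 V for any pH:

```python
# Equilibrium potentials for water splitting as a function of pH (25 C),
# using the Nernstian 59 mV/pH shift quoted in the text.

def e_anode_nhe(pH):    # OER equilibrium potential, V vs. NHE
    return 1.23 - 0.059 * pH

def e_cathode_nhe(pH):  # HER equilibrium potential, V vs. NHE
    return 0.0 - 0.059 * pH

def to_rhe(e_nhe, pH):  # convert an NHE-referenced potential to the RHE scale
    return e_nhe + 0.059 * pH

for pH in (0, 7, 14):
    ec = e_cathode_nhe(pH)
    print(f"pH {pH:>2}: E_HER = {ec:+.3f} V vs. NHE = {to_rhe(ec, pH):+.3f} V vs. RHE")

# The full-cell thermodynamic voltage, e_anode_nhe(pH) - e_cathode_nhe(pH),
# stays at 1.23 V regardless of pH.
```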
However, for practical operations, a large excess potential, i.e., the overpotential (ηc), is required, and the development of highly efficient HER electrocatalysts is directly linked to the reduction of the overpotential.

Fig. 1 I-V curve for the full water splitting reaction.

HER activity parameters

Overpotential to drive a current density of −10 mA cm−2

Generally, the comparison of HER activities has been made in terms of the overpotential at a current density of −10 mA cm−2. This current density corresponds to an efficiency of approximately 10% in solar-to-fuel devices [84], and the derivation is as follows: Integration of the solar spectrum (AM1.5G) yields a value of 100 mW cm−2, referred to as "1 sun". As the redox potential for water oxidation is ~1.2 V, a 100% efficient solar-to-fuel device would give 100 (mA V cm−2)/(1.2 V) = 83 mA cm−2 under AM1.5G. Thus, a 10% efficient solar-to-fuel device would give 8.3 mA cm−2. Therefore, the ranking of HER catalysts by comparison of the overpotentials required to drive a current density of −10 mA cm−2 is reasonable in a practical context.

Tafel slope

The Tafel slope is an important kinetic parameter, and is derived from the equation:

$$\left| \eta \right| = \frac{2.3RT}{\alpha n F}\log\frac{J}{J_{0}}$$

where η is the overpotential, R is the ideal gas constant, T is the absolute temperature, α is the electrochemical transfer coefficient, n is the number of electrons involved in the electrode reaction, F is the Faraday constant, J is the measured current density, and J0 is the exchange current density. In addition, J0 is related to the electron-transfer rate of the reaction, reflecting the intrinsic catalytic activity of the catalyst. From the above equation, the Tafel slope is defined as $\frac{2.3RT}{\alpha n F}$ and bears the units mV dec−1. The Tafel slope is therefore determined from the linear regression line of the Tafel plots (η vs. log J), which can be derived from the I-V polarization curve. The Tafel slope has been used to assess the HER mechanism taking place. More specifically, in acid, the HER proceeds by the initial adsorption of a hydrogen atom on the catalyst surface ($\mathrm{H^{+}_{(aq)} + e^{-} \leftrightarrow H_{ad}}$), which is referred to as the Volmer step. Subsequently, molecular hydrogen is produced via the chemical recombination of two Had atoms ($2\mathrm{H_{ad}} \leftrightarrow \mathrm{H_{2(g)}}$; the Tafel step), or through a second electron transfer ($\mathrm{H^{+}_{(aq)} + H_{ad} + e^{-} \leftrightarrow H_{2(g)}}$; the Heyrovsky step) [7]. The HER following the Volmer–Tafel mechanism gives rise to a Tafel slope of 29 mV dec−1, whereas the HER via the Volmer–Heyrovsky mechanism yields 38 mV dec−1. In both cases, the combination of two hydrogen atoms is the rate-determining step, with a lower value indicating a faster reaction rate. When the Volmer step is the rate-determining step or the catalyst surface coverage is close to 1, the Tafel slope increases to 116 mV dec−1. Most MoSx catalysts have shown Tafel slopes in the range of 60–100 mV dec−1, following the Volmer–Heyrovsky mechanism.

Turnover frequency

The precise evaluation of each surface site's activity is important to obtain a fundamental understanding of the origin of catalytic activity. The intrinsic activity can be assessed by calculating the turnover frequency (TOF), which is defined as the turnover rate per surface active site.
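As a concrete illustration, the following minimal sketch (Python) converts a measured geometric current density into a per-site TOF; the site density used here is an illustrative assumption, not a value taken from any of the cited studies:

```python
# Minimal sketch: estimating an HER turnover frequency (TOF) from a measured
# geometric current density and an assumed areal density of active sites.

N_A = 6.022e23        # Avogadro's number, 1/mol
F   = 96485.0         # Faraday constant, C/mol
n   = 2               # electrons transferred per H2 molecule

def tof_per_site(j_geo_mA_cm2, sites_per_cm2):
    """TOF in H2 molecules per site per second."""
    j = abs(j_geo_mA_cm2) * 1e-3           # A/cm^2
    h2_rate = j / (n * F)                  # mol H2 / (s * cm^2)
    return h2_rate * N_A / sites_per_cm2   # H2 / (site * s)

# Example: -10 mA/cm^2 with an assumed 1e15 active sites per cm^2
print(f"TOF ~= {tof_per_site(-10, 1e15):.1f} s^-1")
```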
While the comparison of TOFs is meaningful, a fair comparison of TOFs has not yet been carried out due to variations in methods for measuring the active sites, in addition to the issues associated with different catalyst structures. Depending on which sites are assigned as active centers, the TOF can vary by many orders of magnitude. Theoretically, it is widely accepted that hydrogen atoms bind to surface S sites. However, the majority of studies have calculated TOFs by assuming that surface Mo atoms are the active sites, as the multiple chemical states for S render the calculation difficult. An alternative method is measuring an electrochemically active surface area (ECSA), which can probe all catalytically active surface sites, except the inert areas. Indeed, in some cases, ECSA-derived TOFs have afforded a fairer comparison between different electrocatalysts [80, 82]. It should be noted that the main active sites can differ according to the catalyst structure. For example, only edge sites are active in 2H-MoS2, while both edge and basal sites are active in 1T-MoS2. For amorphous MoSx catalysts, the identification of active sites has been difficult due to the complexity of their structures. Therefore, a standard method for TOF calculations should be established for fair comparison of MoSx catalysts with different catalyst structures.

Gibbs free energy

The Gibbs free energy (ΔGH) for atomic hydrogen adsorption has been widely used as an activity descriptor for HER catalysts. According to the Sabatier principle, particularly strong or weak interactions between reaction intermediates and catalysts can lower the overall catalyst efficiency. The HER activity therefore exhibits a volcano-shaped relationship as a function of ΔGH [21]. Among the various catalysts examined, the Pt catalyst exhibits the highest HER performance, with a ΔGH value close to zero, and it has been employed as a figure-of-merit for examining the HER performances of newly developed electrocatalysts.

Active sites in molybdenum sulfides

Active edge sites

The active edge sites of MoSx-based HER electrocatalysts bear a remarkable resemblance to those of the nitrogenase and hydrogenase enzymes, which exhibit excellent activities and selectivities in their natural systems (Fig. 2a) [20]. In 2005, theoretical calculations by Nørskov and coworkers revealed that the ΔGH value of the MoS2 edge is almost thermo-neutral, suggesting that the MoS2 edge is a highly plausible active site for the HER (Fig. 2b) [20]. Indeed, this was demonstrated experimentally using a model catalyst comprising MoS2 nanoparticles grown on a Au(111) surface (Fig. 2c) [21]. This work revealed that the electrochemical HER activity exhibits a linear correlation with the edge length of MoS2 (Fig. 2d). These pioneering works have provided important evidence that the active sites of MoS2 are located at the edge planes. Since these findings were reported, significant advances have been achieved in increasing the density of active edge sites [22,23,24,25,26,27,28,29,30,31,32,33,34], as will be discussed in Sect. 4.1.
Fig. 2 (reprinted with permission from Refs. [20, 21]): a Active sites of the nitrogenase and hydrogenase, and depiction of the Mo-edge on a MoS2 slab. b Free energy diagram for hydrogen adsorption. c STM image of MoS2 nanoparticles on a Au(111) surface. d The exchange current density versus the MoS2 edge length.

Depending on the size of the MoS2 nanosheets employed, the edges of MoS2 can be covered with 0, 50, 75, or 100% sulfur atoms [55, 85, 86], where the sulfur coverage on the edges can significantly affect the adsorption of H atoms, which is directly correlated to HER kinetics. DFT calculations revealed that the most favorable edge configurations correspond to Mo edges covered by 50% S, where ΔGH = 0.06 eV [87]. In addition to the sulfur coverage, the hydrogen coverage on the edges is also an important factor in determining the hydrogen binding energy [88].

Active sites in 2H- and 1T-MoS2

MoS2 has several polymorphs with distinct atomic configurations and electronic structures. Among these polymorphs, the 2H-MoS2 and 1T-MoS2 structures are the most widely investigated for use as electrocatalysts in the HER. These polymorphs exhibit trigonal prismatic and octahedral unit cell structures, respectively (Fig. 3a–c) [17]. In addition, 1T-MoS2 has a dense atomic configuration in the basal surfaces and a high electronic conductivity, which is six orders of magnitude greater than that of 2H-MoS2, thereby resulting in an enhanced HER performance of 1T-MoS2 compared to 2H-MoS2. However, the preparation of 1T-MoS2 is more complicated than that of 2H-MoS2, due to its metastable nature. In this regard, significant efforts have been directed to prepare the stable and highly pure 1T-MoS2 polymorph using chemical Li-intercalation [35, 36], electrochemical Li-intercalation [26, 37], and a pressurized hydrothermal process [38]. For example, the Chhowalla group prepared the 1T phase of MoS2 through an exfoliation reaction using lithium borohydride, where a significantly higher yield was obtained than that using n-butyllithium (i.e., 80% vs. ~50%) [36]. Using the exfoliated 1T-MoS2 sample, the main active sites of the two polymorphs, i.e., 2H-MoS2 and 1T-MoS2, were compared. Upon partial oxidation of the edges, the HER activity of 2H-MoS2 was significantly reduced, while that of 1T-MoS2 remained unaffected, suggesting that the main active site of the 2H-MoS2 polymorph is located at the edge sites, while that of 1T-MoS2 may be located at basal sites (Fig. 3d, e).

Fig. 3 (reprinted with permission from Refs. [17, 35, 36]): a Unit cell structures of 2H-MoS2 and 1T-MoS2. HRTEM images of b 2H-MoS2 and c 1T-MoS2. d Schematic representation of the oxidation process and partial restoration of the MoS2 edges after several voltammetric cycles. e HER polarization curves of 1T- and 2H-MoS2 before and after edge oxidation. Dashed lines indicate the iR-corrected polarization curves.

Activating inert basal sites

The basal plane of the 2H-MoS2 polymorph has long been considered inert towards the HER, essentially rendering the large basal surface useless. However, several theoretical calculations have suggested that the inert basal planes of MoS2 can be exploited as potential active sites following activation by heteroatom doping, defect site generation, and strain engineering (Fig. 4) [39,40,41,42,43,44,45,46,47,48,49]. For example, heteroatom doping into MoS2 can produce high dopant concentrations on the surface, thereby modifying the hydrogen adsorption strength of nearby surface atoms (Fig. 4a).
More specifically, Du and co-workers suggested that doping MoS2 with a heteroatom in combination with a small compressive strain can yield an ideal ΔGH for hydrogen binding in the HER (Fig. 4b, c) [40]. In addition, defects are known to perturb the local density of states, creating additional energy levels below the conduction bands [89]. In this context, Wang and co-workers evaluated the effect of sixteen different structural defects on activating the basal plane of MoS2 monolayers (Fig. 4d) [43]. The theoretical results suggested that six defects, including sulfur vacancies, greatly enhanced the HER performance of MoS2 (Fig. 4e). These theoretical findings were later verified experimentally, as discussed in Sects. 4.3 and 4.4.

Fig. 4: a The volcano-shaped relationship between log(i0) and ΔGH. Free energy diagrams for the HER on b Rh-doped MoS2 and c Ag-doped MoS2 at different strains. d Sixteen examples of different structural defects. e ΔGH of the HER process on each defect region.

Active sites in amorphous MoSx

In addition to the crystalline MoS2 structure, amorphous MoSx has attracted significant attention due to its facile preparation under mild conditions, such as wet chemical synthesis [60] and electrodeposition [58, 59, 61]. Unlike MoS2, the active sites present in amorphous MoSx have received little attention due to the complex polymeric structure of such compounds (Fig. 5a) [52]. Amorphous MoSx contains short-range atomic arrangements with [Mo3S13]2− clusters as building units (Fig. 5b). Similar to MoS2, the question of active sites in amorphous MoSx has been unavoidably and continuously raised. Recently, the sulfur atoms present in amorphous MoSx have been directly confirmed as the catalytic active sites for the HER via operando Raman spectroscopic analysis [54]. However, the sulfur atoms exist in four key states, namely bridging S22−, terminal S22−, unsaturated S2−, and apical S2−. Due to the diverse sulfur chemical states present in amorphous MoSx, identification of the catalytically active sulfur sites for proton reduction is challenging. Interestingly, Yeo and co-workers reported a linear correlation between TOFs for the HER and the percentage of S species with higher electron binding energies using X-ray photoelectron spectroscopy (XPS) (Fig. 5c) [53]. This work suggested bridging S22− species as the potential catalytic active sites. In addition, Yano, Hu, and co-workers investigated the structural changes taking place in amorphous MoSx under HER conditions using in situ X-ray absorption spectroscopy (Fig. 5d) [51]. They proposed a reaction mechanism in which the catalytic species is similar to MoS2, which corroborates an earlier result by Nilsson, Jaramillo, and co-workers [50]. Although significant efforts have been devoted to revealing the active sites for proton reduction, the identification and confirmation of genuine catalytically active sulfur sites remain elusive.

Fig. 5 (reprinted with permission from Refs. [51,52,53]): a Coordination polymeric structure of amorphous MoSx containing [Mo3S13]2− building blocks. b Arrangement of the cluster units in a 1D unfolding chain. c TOFs versus the percentage of S atoms with high electron binding energy. d Proposed catalytic cycle for the HER over amorphous MoSx.

Synthetic strategies for increasing active site densities

Nanospace-confined growth

Reducing the particle size of MoS2 is the most straightforward method that can increase the density of active edge sites.
However, thermodynamics tends to favor growth through the basal plane because the formation of edge sites is highly energetic due to the under-coordinated atomic configuration. To overcome this challenge, confinement growth within a nanospace has been reported [22,23,24,25]. The Jaramillo group successfully synthesized a mesoporous MoS2 structure using a silica template with double-gyroid (DG) morphology (Fig. 6a) [22]. The resulting DG MoS2 structure exhibited a high surface curvature, thereby exposing a large fraction of active edge sites. The DG MoS2 exhibited a higher HER performance than high aspect-ratio core–shell MoO3–MoS2 nanowires (Fig. 6b). In addition, the DG MoS2 gives a Tafel slope of 50 mV dec−1, which is relatively low compared to previously reported MoS2-based HER catalysts (Fig. 6c). In this direction, our group prepared layer number-controlled MoS2 nanosheets using the nanospace confined-growth approach [23]. In this work, the pores of mesoporous silica templates were partially filled with carbon, and MoS2 structures were subsequently grown inside the residual nanospace of the silica-carbon composites. After the etching of silica template, MoS2 nanostructures embedded in the frameworks of ordered mesoporous carbons (MoS2@OMC) were generated. As shown in the TEM images (Fig. 6d), the formation of an extended basal plane was successfully hindered with a size <5 nm in the lateral direction. It was found that the TOF increased upon decreasing the layer number in MoS2, and this trend in activity could be correlated to the physical and chemical properties of MoS2 nanoplates (Fig. 6e). These results indicate that space confinement growth paves the way to controlling the surface structure and size of MoS2 at the nanoscale to ultimately develop effective catalysts with high densities of active edge sites at the surface. a Synthetic procedure and structural model for the mesoporous MoS2 with a double-gyroid (DG) morphology. b Cyclic voltammogram of DG MoS2 (1 min sample) versus core–shell MoO3-MoS2 nanowires (NW) at 5 mV s−1. c Tafel plot of DG MoS2 (1 min sample) versus core–shell MoO3-MoS2 NW. d Synthetic procedure for the MoS2 nanosheets embedded on ordered mesoporous carbon nanorods (MoS2@OMC), and TEM images for 1L-, 2L-, 3L-, and 4L-MoS2@OMC. e TOFs versus the number of layers in the MoS2 Phase engineering As mentioned in Sect. 3.2, the active sites, which were limited to the edges in 2H-MoS2, have been expanded to basal surfaces via phase engineering from trigonal prismatic (2H-MoS2) to metallic octahedral (1T-MoS2) structures (Fig. 7a, b). For example, the Jin group demonstrated that the chemical exfoliation of 2H-MoS2 using n-butyllithium significantly enhances the HER activity through the formation of metallic 1T-MoS2 [35]. This phenomenon was ascribed to the fast electron transport and increased number of active sites guaranteed by the metallic 1T-MoS2 nanosheets. These 1T-MoS2 nanosheets required a low overpotential of 187 mV to drive a current density of −10 mA cm−2, compared to ~313 mV for 2H-MoS2 (Fig. 7c). Furthermore, the Tafel slope of 43 mV dec−1 for 1T-MoS2 was significantly lower than that of 2H-MoS2 (i.e., 110 mV dec−1), indicating fast HER kinetics in 1T-MoS2 (Fig. 7d). As an alternative phase engineering technique, the Cui group reported the use of an electrochemical Li intercalation method to generate the 1T-MoS2 phase (Fig. 7e) [26]. 
This method allowed the vertically-aligned 2H-MoS2 to be converted into 1T-MoS2, which exhibited an enhanced HER performance (Fig. 7f, g). In addition to the intercalation of Li ions, mechanical strains also induced the partial formation of 1T-MoS2 structures, thereby activating the HER [67]. However, despite numerous reports focusing on the use of phase engineering to enhance HER performances, the origin of the HER activity has not yet been completely elucidated. Decoupling of the intrinsic activities of 1T phase from the overall HER activity is required to reach a fundamental understanding of the active sites present in MoS2 polymorphs. (Figures reprinted with permission from Refs. [9, 26, 35]) Dark-field scanning transmission electron microscopy images of a 2H-MoS2, and b 1T-MoS2. c Polarization curves of the 2H- and 1T-MoS2 for the HER, and d the corresponding Tafel plots. Filled symbols indicate the iR-corrected data. e Galvanostatic discharge curve representing the lithiation process. f Polarization curves of the pristine and lithiated MoS2 for the HER, and g the corresponding Tafel plots Heteroatom-doping The incorporation of heteroatoms into the basal surface of MoS2 nanosheets can significantly modify the electronic structure of in-plane S atoms neighboring the heteroatom, thereby altering the adsorption strength of H atoms. In this context, the Bao group reported the doping of single Pt atoms into the in-plane domain of MoS2 nanosheets (Pt-MoS2) [39], where the resulting Pt-MoS2 exhibited an enhanced HER performance compared with the undoped MoS2. Furthermore, they also screened the HER activities of MoS2 doped with a number of transition metals, resulting in volcano-shaped relationships with the adsorption free energy of the H atoms (∆G H) (Fig. 4a). Their study suggests a novel method for activating the inert in-plane domain of MoS2 catalysts, which may also be extended to other 2D materials applicable in a variety of catalytic reactions. Defect and strain engineering The inert basal surfaces of 2H-MoS2 have also been successfully activated by creating defect sites and/or inducing strain [42,43,44,45,46,47]. The first example of defect engineering conducted by Xie and co-workers focused on the exposure of additional active edge planes by forming cracks on the surfaces of nanosheets (Fig. 8a) [42]. They reported that defect-rich MoS2 exhibited a significantly enhanced HER performance compared with defect-free MoS2 (Fig. 8b). In addition, the Ajayan group demonstrated that oxygen plasma treatment and H2 annealing introduced additional active sites within the MoS2 monolayer, significantly improving the HER activity [44]. Recently, more rational and controllable defect modulation has been reported through combined experimental and theoretical studies [45,46,47]. Allwood and co-workers prepared MoS2 nanocrystals and activated the Mo atoms in the basal surface of MoS2 nanocrystals by S depletion [45], with the resulting activated MoS2 exhibiting an very high HER performance (~150 mV at −10 mA cm−2 and a Tafel slope of ~29 mV dec−1). Cao and co-workers also verified the importance of S vacancies on the catalytic activity for the HER [46], estimating the intrinsic TOFs of the edge sites, S vacancies, and grain boundaries as approximately 7.5, 3.2, and 0.1 s−1, respectively. Finally, the Zheng and Nørskov groups reported that straining of the S-vacancies further enhances the HER activity (Fig. 8c–e) [48]. 
The experimental results was further verified with theoretical results that optimum level of strain and S-vacancy can tune the ΔG H close to zero, guaranteeing the highest intrinsic HER activity. a Structural models of the defect-free and defect-rich MoS2 structures. b Polarization curves of the defect-free and defect-rich MoS2 structures for the HER. c Schematic representations of the top and side views of MoS2 containing strained S-vacancies on the basal planes. d Free energy versus the HER reaction coordinate for the S-vacancy range of 0–25%. e Polarization curves for the strained, vacancy, and strained vacancy MoS2 for the HER Summary and future challenges In this review, we have highlighted recent major achievements regarding elucidation of the active sites in MoSx-based electrocatalysts for the HER, and summarized synthetic strategies for designing MoSx electrocatalysts with enhanced HER activities. The edge site of MoS2 was initially identified as the active site for the HER, which then triggered the development of new HER catalysts that can maximize the density of active edge sites, thereby boosting HER activity. The basal surface of MoS2, which were previously believed to be inert towards the HER, can be converted into HER active species via appropriate structural engineering, which include phase transformation, heteroatom doping, defect site generation, and strain engineering. In addition to the crystalline MoS2, amorphous MoSx has also been extensively studied as an efficient electrocatalyst for the HER. Amorphous MoSx can possess abundant active edge structures originating from the building blocks of [Mo3S13]2− clusters, however, the multiple chemical states of sulfur in such species hamper identification of the actual active sulfur states. A comprehensive and systematic study to reveal the key active sites in different MoSx structures still remains a challenging task. Understanding of active sites would enable high-performance MoSx elecrocatalysts for the HER. J.A. Turner, Science 305, 972 (2004) J.K. Norskov, C.H. Christensen, Science 312, 1322 (2006) J.N. Armor, Appl. Catal. A 176, 159 (1999) A. Haryanto, S. Fernando, N. Murali, S. Adhikari, Energy Fuels 19, 2098 (2005) J. Ivy, Summary of electrolytic hydrogen production milestone completion report, U.S. Department of Energy (2004) T.E. Mallouk, Nat. Chem. 5, 362 (2013) D. Merki, X. Hu, Energy Environ. Sci. 4, 3878 (2011) M.-R. Gao, Y.-F. Xu, J. Jiang, S.-H. Yu, Chem. Soc. Rev. 42, 2986 (2013) M. Chhowalla, H.S. Shin, G. Eda, L.-J. Li, K.P. Loh, H. Zhang, Nat. Chem. 5, 263 (2013) C.G. Morales-Guio, L.-A. Stern, X. Hu, Chem. Soc. Rev. 43, 6555 (2014) J.D. Benck, T.R. Hellstern, J. Kibsgaard, P. Chakthranont, T.F. Jaramillo, ACS Catal. 4, 3957 (2014) J. Yang, H.S. Shin, J. Mater. Chem. A 2, 5979 (2014) Y. Yan, B. Xia, Z. Xu, X. Wang, ACS Catal. 4, 1693 (2014) M. Zeng, Y. Li, J. Mater. Chem. A 3, 14942 (2015) Y. Zheng, Y. Jiao, M. Jaroniec, S.Z. Qiao, Angew. Chem. Int. Ed. 54, 52 (2015) P.C.K. Vesborg, B. Seger, I. Chorkendorff, J. Phys. Chem. Lett. 6, 951 (2015) R. Lv, J.A. Robinson, R.E. Schaak, D. Sun, Y. Sun, T.E. Mallouk, M. Terrones, Acc. Chem. Res. 48, 56 (2015) D. Voiry, J. Yang, M. Chhowalla, Adv. Mater. 28, 6197 (2016) Q. Ding, B. Song, P. Xu, S. Jin, Chemistry 1, 699 (2016) B. Hinnemann, P.G. Moses, J. Bonde, K.P. Jørgensen, J.H. Nielsen, S. Horch, I. Chorkendorff, J.K. Nørscov, J. Am. Chem. Soc. 127, 5308 (2005) T.F. Jaramillo, K.P. Jørgensen, J. Bonde, J.H. Nielsen, S. Horch, I. 
Chorkendorff, Science 317, 100 (2007) J. Kibsgaard, Z. Chen, B.N. Reinecke, T.F. Jaramillo, Nat. Mater. 11, 963 (2012) B. Seo, G.Y. Jung, Y.J. Sa, H.Y. Jeong, J.Y. Cheon, J.H. Lee, H.Y. Kim, J.C. Kim, H.S. Shin, S.K. Kwak, S.H. Joo, ACS Nano 9, 3728 (2015) C. Zhu, X. Mu, P.A. van Aken, Y. Yu, J. Maier, Angew. Chem. Int. Ed. 53, 2152 (2014) X. Zheng, J. Xu, K. Yan, H. Wang, Z. Wang, S. Yang, Chem. Mater. 26, 2344 (2014) H. Wang, Z. Lu, S. Xu, D. Kong, J.J. Cha, G. Zheng, P.-C. Hsu, K. Yan, D. Bradshaw, F.B. Prinz, Y. Cui, Proc. Natl. Acad. Sci. 110, 19701 (2013) D. Kong, H. Wang, J.J. Cha, M. Pasta, K.J. Koski, J. Yao, Y. Cui, Nano Lett. 13, 1341 (2013) M. Chatti, T. Gengenbach, R. King, L. Spiccia, A.N. Simonov, Chem. Mater. 29, 3092 (2017) T. Wang, L. Liu, Z. Zhu, P. Papakonstantinou, J. Hu, H. Liu, M. Li, Energy Environ. Sci. 6, 625 (2013) J. Ding, Y. Zhou, Y. Li, S. Guo, X. Huang, Chem. Mater. 28, 2074 (2016) D.Y. Chung, S.-K. Park, Y.-H. Chung, S.-H. Yu, D.-H. Lim, N. Jung, H.C. Ham, H.-Y. Park, Y. Piao, S.J. Yoo, Y.-E. Sung, Nanoscale 6, 2131 (2014) T.F. Jaramillo, J. Bonde, J. Zhang, B.-L. Ooi, K. Andersson, J. Ulstrup, I. Chorkendorff, J. Phys. Chem. C 112, 17492 (2008) H.I. Karunadasa, E. Montalvo, Y. Sun, M. Majda, J.R. Long, C.J. Chang, Science 335, 698 (2012) J. Kibsgaard, T.F. Jaramillo, F. Besenbacher, Nat. Chem. 6, 248 (2014) M.A. Lukowski, A.S. Daniel, F. Meng, A. Forticaux, L. Li, S. Jin, J. Am. Chem. Soc. 135, 10274 (2013) D. Voiry, M. Salehi, R. Silva, T. Fujita, M. Chen, T. Asefa, V.B. Shenoy, G. Eda, M. Chhowalla, Nano Lett. 13, 6222 (2013) H. Wang, Z. Lu, D. Kong, J. Sun, T.M. Hymel, Y. Cui, ACS Nano 8, 4940 (2014) X. Geng, W. Sun, W. Wu, B. Chen, A. Al-Hilo, M. Benamara, H. Zhu, F. Watanabe, J. Cui, T. Chen, Nat. Commun. 7, 10672 (2016) J. Deng, H. Li, J. Xiao, Y. Tu, D. Deng, H. Yang, H. Tian, J. Li, P. Ren, X. Bao, Energy Environ. Sci. 8, 1594 (2015) G. Gao, Q. Sun, A. Du, J. Phys. Chem. C 120, 16761 (2016) D. Escalera-López, Y. Niu, J. Yin, K. Cooke, N.V. Rees, R.E. Palmer, ACS Catal. 6, 6008 (2016) J. Xie, H. Zhang, S. Li, R. Wang, X. Sun, M. Zhou, J. Zhou, X.W. Lou, Y. Xie, Adv. Mater. 25, 5807 (2013) Y. Ouyang, C. Ling, Q. Chen, Z. Wang, L. Shi, J. Wang, Chem. Mater. 28, 4390 (2016) G. Ye, Y. Gong, J. Lin, B. Li, Y. He, S.T. Pantelides, W. Zhou, R. Vajtai, P.M. Ajayan, Nano Lett. 16, 1097 (2016) L. Lin, N. Miao, Y. Wen, S. Zhang, P. Ghosez, Z. Sun, D.A. Allwood, ACS Nano 10, 8929 (2016) G. Li, D. Zhang, Q. Qiao, Y. Yu, D. Peterson, A. Zafar, R. Kumar, S. Curtarolo, F. Hunte, S. Shannons, Y. Zhu, W. Yang, L. Cao, J. Am. Chem. Soc. 138, 16632 (2016) C. Tsai, H. Li, S. Park, J. Park, H.S. Han, J.K. Nørskov, X. Zheng, F. Abild-Pedersen, Nat. Commun. 8, 15113 (2017) H. Li, C. Tsai, A.L. Koh, L. Cai, A.W. Contryman, A.H. Fragapane, J. Zhao, H.S. Han, H.C. Manoharan, F. Abild-Pedersen, J.K. Nørskov, X. Zheng, Nat. Mater. 15, 48 (2016) H. Li, M. Du, M.J. Mleczko, A.L. Koh, Y. Nishi, E. Pop, A.J. Bard, X. Zheng, J. Am. Chem. Soc. 138, 5123 (2016) H.G. Sanchez Casalongue, J.D. Benck, C. Tsai, R.K.B. Karlsson, S. Kaya, M.L. Ng, L.G.M. Pettersson, F. Abild-Pedersen, J.K. Nørskov, H. Ogasawara, T.F. Jaramillo, A. Nilsson, J. Phys. Chem. C 118, 29252 (2014) B. Lassalle-Kaiser, D. Merki, H. Vrubel, S. Gul, V.K. Yachandra, X. Hu, J. Yano, J. Am. Chem. Soc. 137, 314 (2015) P.D. Tran, T.V. Tran, M. Orio, S. Torelli, Q.D. Truong, K. Nayuki, Y. Sasaki, S.Y. Chiam, R. Yi, I. Honma, J. Barber, V. Artero, Nat. Mater. 15, 640 (2016) L.R.L. Ting, Y. Deng, L. Ma, Y.-J. Zhang, A.A. 
Peterson, B.S. Yeo, ACS Catal. 6, 861 (2016) Y. Deng, L.R.L. Ting, P.H.L. Neo, Y.-J. Zhang, A.A. Peterson, B.S. Yeo, ACS Catal. 6, 7790 (2016) J. Bonde, P.G. Moses, T.F. Jaramillo, J.K. Nørskov, I. Chorkendorff, Faraday Discuss. 140, 219 (2008) Y. Li, H. Wang, L. Xie, Y. Liang, G. Hong, H. Dai, J. Am. Chem. Soc. 133, 7296 (2011) Z. Chen, D. Cummins, B.N. Reinecke, E. Clark, M.K. Sunkara, T.F. Jaramillo, Nano Lett. 11, 4168 (2011) D. Merki, S. Fierro, H. Vrubel, X. Hu, Chem. Sci. 2, 1262 (2011) D. Merki, H. Vrubel, L. Rovelli, S. Fierro, X. Hu, Chem. Sci. 3, 2515 (2012) J.D. Benck, Z. Chen, L.Y. Kuritzky, A.J. Forman, T.F. Jaramillo, ACS Catal. 2, 1916 (2012) H. Vrubel, X. Hu, ACS Catal. 3, 2002 (2013) Y. Yu, S.-Y. Huang, Y. Li, S.N. Steinmann, W. Yang, L. Cao, Nano Lett. 14, 553 (2014) D.J. Li, U.N. Maiti, J. Lim, D.S. Choi, W.J. Lee, Y. Oh, G.Y. Lee, S.O. Kim, Nano Lett. 14, 1228 (2014) M.-R. Gao, M.K.Y. Chan, Y. Sun, Nat. Commun. 6, 7493 (2015) D. Kiriya, P. Lobaccaro, H.Y.Y. Nyein, P. Taheri, M. Hettick, H. Shiraki, C.M. Sutter-Fella, P. Zhao, W. Gao, R. Maboudian, J.W. Ager, A. Javey, Nano Lett. 16, 4047 (2016) D.R. Cummins, U. Martinez, A. Sherehiy, R. Kappera, A. Martinez-Garcia, R.K. Schulze, J. Jasinski, J. Zhang, R.K. Gupta, J. Lou, M. Chhowalla, G. Sumanasekera, A.D. Mohite, M.K. Sunkara, G. Gupta, Nat. Commun. 7, 11857 (2016) J.H. Lee, W.S. Jang, S.W. Han, H.K. Baik, Langmuir 30, 9866 (2014) J. Deng, H. Li, S. Wang, D. Ding, M. Chen, C. Liu, Z. Tian, K.S. Novoselov, C. Ma, D. Deng, X. Bao, Nat. Commun. 8, 14430 (2017) D. Voiry, H. Yamaguchi, J. Li, R. Silva, D.C.B. Alves, T. Fujita, M. Chen, T. Asefa, V.B. Shenoy, G. Eda, M. Chhowalla, Nat. Mater. 12, 850 (2013) J. Yang, D. Voiry, S.J. Ahn, D. Kang, A.Y. Kim, M. Chhowalla, H.S. Shin, Angew. Chem. Int. Ed. 52, 13751 (2013) B. Seo, H.Y. Jeong, S.Y. Hong, A. Zak, S.H. Joo, Chem. Commun. 51, 8334 (2015) D.-Y. Wang, M. Gong, H.-L. Chou, C.-J. Pan, H.-A. Chen, Y. Wu, M.-C. Lin, M. Guan, J. Yang, C.-W. Chen, Y.-L. Wang, B.-J. Hwang, C.-C. Chen, H. Dai, J. Am. Chem. Soc. 137, 1587 (2015) N. Kornienko, J. Resasco, N. Becknell, C.-M. Jiang, Y.-S. Liu, K. Nie, X. Sun, J. Guo, S.R. Leone, P. Yang, J. Am. Chem. Soc. 137, 7448 (2015) D. Yoon, B. Seo, J. Lee, K.S. Nam, B. Kim, S. Park, H. Baik, S.H. Joo, K. Lee, Energy Environ. Sci. 9, 850 (2016) H. Vrubel, X. Hu, Angew. Chem. Int. Ed. 51, 12703 (2012) W.-F. Chen, C.-H. Wang, K. Sasaki, N. Marinkovic, W. Xu, J.T. Muckerman, Y. Zhu, R.R. Adzic, Energy Environ. Sci. 6, 943 (2013) Y.-T. Xu, X. Xiao, Z.-M. Ye, S. Zhao, R. Shen, C.-T. He, J.-P. Zhang, Y. Li, X.-M. Chen, J. Am. Chem. Soc. 139, 5285 (2017) W.-F. Chen, K. Sasaki, C. Ma, A.I. Frenkel, N. Marinkovic, J.T. Muckerman, Y. Zhu, R.R. Adzic, Angew. Chem. Int. Ed. 51, 6131 (2012) P. Liu, J.A. Rodriguez, J. Am. Chem. Soc. 127, 14871 (2005) E.J. Popczun, J.R. Mckone, C.G. Read, A.J. Biacchi, A.M. Wiltrout, N.S. Lewis, R.E. Schaak, J. Am. Chem. Soc. 135, 9267 (2013) J. Tian, Q. Liu, A.M. Asiri, X. Sun, J. Am. Chem. Soc. 136, 7587 (2014) B. Seo, D.S. Baek, Y.J. Sa, S.H. Joo, CrystEngComm 18, 6083 (2016) H. Tributsch, J.C. Bennett, J. Electroanal. Chem. 81, 97 (1977) Y. Gorlin, T.F. Jaramillo, J. Am. Chem. Soc. 132, 13612 (2010) L.P. Hansen, Q.M. Ramasse, C. Kisielowski, M. Brorson, E. Johnson, H. Topsøe, S. Helveg, Angew. Chem. Int. Ed. 50, 10153 (2011) W. Zhou, X. Zou, S. Najmaei, Z. Liu, Y. Shi, J. Kong, J. Lou, P.M. Mjayan, B.I. Yakobson, J.-C. Idrobo, Nano Lett. 13, 2615 (2013) C. Tsai, K. Chan, F. Abild-Pedersen, J.K. Nørskov, Phys. Chem. 
Chem. Phys. 16, 13156 (2014) C. Tsai, F. Abild-Pedersen, J.K. Nørskov, Nano Lett. 14, 1381 (2014) J. Hong, Z. Hu, M. Probert, K. Li, D. Lv, X. Yang, L. Gu, N. Mao, Q. Feng, L. Xie, J. Zhang, D. Wu, Z. Zhang, C. Jin, W. Ji, X. Zhang, J. Yuan, Z. Zhang, Nat. Commun. 6, 6293 (2015) BS and SHJ wrote the manuscript. Both authors read and approved the final manuscript. This work was supported by the Korea Institute for Advancement of Technology (KIAT) funded by the Ministry of Trade, Industry and Energy (MOTIE) (KIAT_N0001754) and the Korea Evaluation Institute of Industrial Technology (KEIT) funded by the MOTIE (10050509). B.S. acknowledges the Global Ph.D. Fellowship (NRF-2013H1A2A1032647). Department of Chemistry, Ulsan National Institute of Science and Technology (UNIST), 50 UNIST-gil, Ulsan, 44919, Republic of Korea Bora Seo & Sang Hoon Joo School of Energy and Chemical Engineering, Ulsan National Institute of Science and Technology (UNIST), 50 UNIST-gil, Ulsan, 44919, Republic of Korea Sang Hoon Joo Bora Seo Correspondence to Sang Hoon Joo. Seo, B., Joo, S.H. Recent advances in unveiling active sites in molybdenum sulfide-based electrocatalysts for the hydrogen evolution reaction. Nano Convergence 4, 19 (2017). https://doi.org/10.1186/s40580-017-0112-3 Molybdenum sulfide Electrocatalyst Active site Synthetic strategy Structure engineering Advanced Nanomaterials and Devices for Next Generation Energy Technologies
Intestinal parasitic infections and associated factors among street dwellers' in Dessie town, North-East Ethiopia: a cross sectional study Daniel Getacher Feleke1Email author, Edosa Kebede Wage1, Tigist Getachew1 and Alemu Gedefie1 Accepted: 4 May 2019 Intestinal parasitic infections are among the major cause of diseases of public health problems in sub-Saharan Africa. In Ethiopia, epidemiological information on street dwellers is very limited. So, this study aimed to determine the prevalence and associated factors of intestinal parasite among street dwellers' in Dessie town, North-East, Ethiopia. A cross-sectional study was carried out on street dwellers in Dessie town from November 2017 to February, 2018. Stool specimen was examined by direct wet mount, formol-ether concentration technique and modified Ziehl–Neelsen methods. Majority of study participants were males 220 (89.4%). The mean age of the study participants were 22.85 (SD = 4.78) years. The overall parasite prevalence was 108/246 (43.9%). Among the six different intestinal parasites detected, H. nana 33 (13.4) and E. histolytica 24 (9.8%) were dominant. Multivariate analysis showed, shoe wearing habit (P = 0.035), hand washing habit after toilet (P = 0.035), and history of animal contact (P = 0.016) had statistically significant association with intestinal parasitic infections after adjusting other variables. Although the prevalence of intestinal parasitic infections in this study was lower than previous studies conducted in similar study groups. The prevention and control strategies of intestinal parasites should address the poor segment of populations including street dwellers. Street dwellers Intestinal parasitic infections are among major public health problems worldwide [1, 2]. It is estimated that about 3.5 billion people are affected, and that 450 million are ill as a result of these infections [3]. Intestinal parasitic infections are among the major public health problems in sub-Saharan Africa and cause morbidity and mortality [4]. They have been also associated with stunting, physical weakness and low educational performance of schoolchildren [5]. Intestinal parasitic infections are more prevalent among the poor segment of population. They are closely associated with low household income, poor personal and environmental sanitation, and overcrowding, limited access to clean water, tropical climate and low altitude [5, 6]. Street dwellers are among the most deprived people in urban areas, in terms of living conditions and lack of access to basic facilities and health indicators [7]. Access to health care for homeless individuals differs greatly from that for the general population. Street dwellers who visit health facilities may not get treatment due to financial problem and the morbidity is extremely high [7, 8]. Parasitic infections are widely distributed in Ethiopia due to low level of living standards, poor environmental sanitation and personal hygiene [2, 9]. Homeless people do not have access to safe water for drinking and for proper hygiene practice and lack of toilet facilities are the main contributors to the high prevalence of intestinal parasites in street dwellers. Due to lack of health service seeking behavior and treatment denial by health service providers, street dwellers can be a reservoir for intestinal parasites and make the prevention and control challenging. In Ethiopia, epidemiological information on the prevalence and associated factors of intestinal parasites in street dwellers is very limited. 
So, this study aimed to determine the prevalence and associated factors of intestinal parasite among street dwellers' in Dessie town, North-East, Ethiopia. Study area and study participants This study was conducted from October, 2017 to January, 2018 on street dwellers in Dessie town, North-East, Ethiopia. Dessie town is located 401 km from Addis Ababa, the capital of Ethiopia. It is located at an altitude of 2470 m above sea level in low-shrouded mountains and hills" and the surrounding mountains. Based on the 2007 national census conducted by the Central Statistical Agency of Ethiopia (CSA), Dessie district has a total population of 151,174, of whom 72,932 are men and 78,242 women; 120,095 or 79.44% are urban inhabitants living in the town of Dessie. The number of street dwellers in Dessie town estimated to be more than 3000. This cross-sectional study was conducted from October 2017 to January, 2018 on street dwellers in Dessie town, North-East Ethiopia. Sampling technique and sample size determination Street dwellers that fulfilled the inclusion criteria were selected by random sampling method. Street dwellers whose age is above 2 years and who can able to provide stool specimen were included. Sample size was calculated using single proportion population formula. $$n = \frac{{(Z_{a/2}^{2}) p\left( {1 - p} \right)}}{{d^{2} }}$$ It was calculated using a prevalence of 89.7% (10) with a margin of error 0.04 and a confidence level of 95%. In line with this, 246 study participants were recruited including the 10% non-response rate. The study was conducted after obtaining ethical clearance from Wollo University ethical committee. A written consent form was used to ask the willingness of the study participants/or guardians. Intestinal parasite infected study participants were treated with the appropriate anti-parasitic drugs. The study sites were visited and data collectors were also trained before data collection. An interview based structured questionnaire was used to collect socio-demographic and other data from all street dwellers. The study participants were instructed how to bring the stool sample and were provided clean, dry leak proof labeled container with toilet paper. Laboratory investigation Stool specimen from each street dweller was examined by direct wet mount method using normal saline (0.85% NaCl solution) and Lugol's iodine at the site of stool specimen collection. Stool samples were preserved with 10% formalin and formol-ether concentration technique was performed from each stool specimens. Samples were examined microscopically using the 10× and 40× objective lenses. Coccidian intestinal parasites were examined using modified Ziehl–Neelsen method. Data quality was checked and were entered to SPSS version 20 software and analyzed. Logistic regression was done to investigate the relationship between the dependent and independent variables. P < 0.05 was considered statistically significant. A total of 246 study participants were involved and majority of the study participants were males 220 (89.4%). The mean age of the study participants were 22.85 (SD = 4.78) years (age ranged from 15 to 36 years). Regarding the marital status, 242 (98.4%) of the study participants were single. Addiction associated problems and peer pressure were mentioned by the study participants as a reason for street dwelling with 86 (35.0%) and 84 (34.15%), respectively (Table 1). 
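Before turning to the tables, the sample-size formula quoted in the Methods can be re-computed directly; the short script below is only an illustrative check using the stated inputs (p = 0.897, d = 0.04, 95% confidence) plus the 10% non-response allowance, so a small rounding difference from the 246 participants actually recruited is to be expected.

```python
# Minimal sketch: single population proportion sample size, n = z^2 * p * (1 - p) / d^2,
# with the inputs stated in the Methods and a 10% non-response allowance added.
import math

z = 1.96   # standard normal quantile for 95% confidence
p = 0.897  # prevalence used for planning (89.7%, from the cited earlier study)
d = 0.04   # margin of error

n = (z ** 2) * p * (1 - p) / d ** 2     # ~221.8
n_total = math.ceil(n * 1.10)           # add 10% for non-response -> ~245

print(f"base sample size  : {math.ceil(n)}")
print(f"plus 10% allowance: {n_total}   (close to the 246 participants recruited)")
```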
Table 1 Socio-demographic characteristics of the study participants (categories reported: age group, educational status, enforcing factors for street dwelling such as divorce and family problems, pregnancy-associated and education-associated reasons, and duration of street dwelling)

Intestinal parasite prevalence in street dwellers

The overall parasite prevalence was 108/246 (43.9%). Six different intestinal parasites were detected. H. nana (33; 13.4%) and E. histolytica (24; 9.8%) were the dominant helminth and protozoan parasites, respectively. Taenia species (19; 7.7%) and G. lamblia (13; 5.3%) were the next most frequently detected parasites, and the remaining two intestinal parasites detected were A. lumbricoides (11; 4.5%) and E. vermicularis (8; 3.3%). Single parasitic infections (35.8%) were the most prevalent, followed by double infections (8.13%), and there were no triple infections. The prevalence of parasitic infection was higher in females. With regard to age, intestinal parasitic infections were more frequent in participants younger than 20 years (47.4%). Similarly, the prevalence of intestinal parasitic infection was higher among participants who had completed secondary school (56.25%) (Table 2).

Table 2 The prevalence of intestinal parasitic infection among different socio-demographic characteristics (columns: number (%) of study participants and number (%) positive for any parasite)

Intestinal parasite prevalence was higher among individuals who did not have the habit of wearing shoes (61.53%) and among individuals who had frequent animal contact (55.4%). Similarly, intestinal parasitic infections were also more frequent in individuals who did not wash their hands after using the toilet (45.3%). Univariate and multivariate logistic regression was done to investigate the relationship between the dependent and independent variables. Goodness of fit of the model was checked using the Hosmer–Lemeshow test (P > 0.05), and the backward logistic regression method was used for the multivariate analysis. In the multivariate analysis, shoe wearing habit (P = 0.035), hand washing habit after toilet (P = 0.035), and history of animal contact (P = 0.016) showed a statistically significant association with intestinal parasitic infection after adjusting for the other variables. The multivariate logistic regression analysis also showed that individuals who had animal contact were more likely to be infected with intestinal parasites (AOR = 2.04, 95% CI 1.14–3.36) (Table 3).

Table 3 The prevalence of intestinal parasites in relation to associated risk factors (risk factors assessed: shoe wearing habit, source of food (restaurants, individual houses, or mixed sources including garbage), type of latrine (none, public, or private), history of animal contact, hand washing after animal contact, hand washing after toilet, hand washing before meals, nail trimming habit, and open defecation; italic values indicate significance of P-value, P < 0.05)

The prevalence of intestinal parasitic infection in street dwellers in the present study was 43.9%. This prevalence was lower than reports from Addis Ababa and Gondar city [1, 8]. This difference could be explained by variations in the climatic conditions of the study area, socio-economic conditions, the methods employed for stool examination, and the season of study. Dessie is a cold place located 2470 m above sea level.
This condition is not considered as the most favorable climate for the existence of many of the intestinal parasites as a result, their occurrence might be reduced. The higher prevalence of intestinal parasites in females than males in the present study was consistent with the study conducted in Addis Ababa while it was not in agreement with a report from Jimma town studied on beggars [8, 10]. In this study six different intestinal parasites were detected and H. nana was the dominant intestinal parasites followed by E. histolytica/dispar, Taenia species and G. lamblia. The higher prevalence of H. nana and E. histolytica/dispar might be associated with improper fecal disposal and consumption of contaminated water, respectively. However, the high prevalence of H. nana and Taenia species in the present study was not in line with other studies conducted in Ethiopia and other countries [1, 8, 10]. The present study agreed with studies that reported geo-helminthes were dominant followed by the E. histolytica/dispar and G. lamblia. This indicates that lack of environmental sanitation and inadequate access of clean water are the main factors that expose street dwellers for intestinal parasitic infections and other communicable diseases. Taenia species was the third highest prevalent parasite next to H. nana and E. histolytica/dispar which was in agreement with a report from a study conducted among street dwellers in Addis Ababa [8]. This might be due to the consumption of unhygienic raw meat street dwellers get from slaughter houses. Compared to result reported from Sudanese street children, the prevalence of G. lamblia and E. histolytica/dispar in the present study was lower. On the other hand, the prevalence of H. nana in the present study was higher than Sudanese report. The higher prevalence of H. nana in the present study might be due to the reason that most of the street dwellers in the present study practiced open defection and the autoinfection characteristics of the parasite. In this study the higher proportion of females were infected with intestinal parasite than males. This finding was in agreement with the study reported from street dwellers in Addis Ababa. Regarding age groups, the rate of intestinal parasitic was slightly higher in individuals younger than 20 years old. This might be due to the hygienic practice and frequent contact with contaminated soil while playing. This was in line with the report from Addis Ababa [8]. In the present study, there was statistically significant association between intestinal parasitic infection and animal contact, hand washing habit after toilet and shoe wearing habit. In general, the prevalence of intestinal parasitic infections was still high in our study. Unless, prevention and control strategies of intestinal parasites addressed this segment of population, it is very challenging to prevent and control intestinal parasites. In this study special diagnostic technique such as scotch tape Entrobious vermicularis and Kato katz for Schistosoma mansoni was not performed due to lack of resources. We would like to acknowledge Wollo University Medical Laboratory Science department technical assistants and other staffs for their cooperation during laboratory investigations. We would also like to thank all our colleagues for their cooperation for the accomplishment of this study. This study was fully funded by the Wollo University. Wollo University has no involvement this study other than providing resources. 
DGF, EKW, TG and AG involved in proposal writing, designed the study and participated in all implementation stages of the project. DGF and EKW also analyzed the data and finalized the write up of the manuscript. DGF, TG and AG were responsible for critically revising the proposal and the manuscript. All authors reviewed the final manuscript. All authors read and approved the final manuscript. Ethical clearance was obtained from the institutional review board of Wollo University, College of Medicines and Health Sciences. The objective of the study was explained and written consent form was used to ask participants' or guardians' (in case of children) for their willingness. Intestinal parasites infected street dwellers were treated with the appropriate anti-parasitic drugs. Department of Medical Laboratory Science, College of Medicine and Health Sciences, Wollo University, Dessie, Ethiopia Moges F, Kebede Y, Kassu A, Degu G, Tiruneh M, Gedefaw M. Infection with HIV and intestinal parasites among street dwellers in Gondar city, northwest Ethiopia. Jpn J Infect Dis. 2006;59(6):400.PubMedGoogle Scholar Missaye A, Dagnew M, Alemu A, Alemu A. Prevalence of intestinal parasites and associated risk factors among HIV/AIDS patients with pre-ART and on-ART attending Dessie hospital ART clinic, Northeast Ethiopia. AIDS Res Ther. 2013;10(1):7.View ArticleGoogle Scholar Teklemariam Z, Abate D, Mitiku H, Dessie Y. Prevalence of intestinal parasitic infection among HIV positive persons who are naive and on antiretroviral treatment in Hiwot Fana Specialized University Hospital, Eastern Ethiopia. ISRN AIDS. 2013;2013.Google Scholar Huruy K, Kassu A, Mulu A, Worku N, Fetene T, Gebretsadik S, et al. Intestinal parasitosis and shigellosis among diarrheal patients in Gondar teaching hospital, northwest Ethiopia. BMC Res Notes. 2011;4(1):472.View ArticleGoogle Scholar Haftu D, Deyessa N, Agedew E. Prevalence and determinant factors of intestinal parasites among school children in Arba Minch town, Southern Ethiopia. Am J Health Res. 2014;2(5):247–54.View ArticleGoogle Scholar Mengistu A, Gebre-Selassie S, Kassa T. Prevalence of intestinal parasitic infections among urban dwellers in southwest Ethiopia. Ethiop J Health Dev. 2007;21(1):12–7.Google Scholar Uddin MJ, Koehlmoos TL, Ashraf A, Khan A, Saha NC, Hossain M. Health needs and health-care-seeking behaviour of street-dwellers in Dhaka, Bangladesh. Health Policy Plan. 2009;24(5):385–94.View ArticleGoogle Scholar Mekonnen B, Erko B, Legesse M. Prevalence of intestinal parasitic infections and related risk factors among Street Dwellers in Addis Ababa, Ethiopia. J Trop Dis Public Health. 2014. https://doi.org/10.4172/2329-891X.1000132.View ArticleGoogle Scholar Adamu H, Petros B. Intestinal protozoan infections among HIV positive persons with and without antiretroviral treatment (ART) in selected ART centers in Adama, Afar and Dire-Dawa, Ethiopia. Ethiop J Health Dev. 2009. https://doi.org/10.4314/ejhd.v23i2.53230.View ArticleGoogle Scholar Lakew A, Kibru G, Biruksew A. Prevalence of intestinal parasites among street beggars in Jimma town, Southwest Ethiopia. Asian Pac J Trop Dis. 2015;5:S85–8.View ArticleGoogle Scholar
Systematic kinetic study of magnesium production using magnesium oxide and carbonic materials at different temperatures Hamid Zahedi ORCID: orcid.org/0000-0002-5379-185X1, Nahid Farzi2 & Nasser Golestani3 The main goal of this study was to determine the industrially best reductant for reduction of magnesium oxide to magnesium with wood charcoal and petroleum coke (petcoke) each in molar ratio 1:1 and 1:2 (oxidant:reductant) at high temperatures. In this study, a new and reliable combination of mathematical modeling and discrete numerical optimization theory by presenting 18 "mathematical filters" not relying only on statistical quantities of fitting (contrary to many similar researches) was introduced. The purpose of these filters was the determination of correct kinetic equation and therefore, the corresponding rate coefficient from among 18 equations most used at present in the challenging field of solid state chemical kinetics. With assistance of a new and fundamental mathematical function and the obtained values of rate coefficients, the function of rate coefficient in temperature was attained. The activation energy was then calculated as a function of temperature using the general definition of activation energy and the determined function for rate coefficient. The comparison between different reducing agents in the different conditions and with relevant previous study was accomplished to determine the best reducing agent from industry standpoint. Also, the areas under experimental data were calculated numerically and utilized for method validation and comparison. It turned out finally that relying only on fitting quantities in the solid state chemical kinetics can readily lead to wrong conclusions about the correct kinetic equation and about the most suitable reducing agent. It is obvious that the erroneous calculations and wrong decisions in the laboratory scale become significant and paramount in industry and this reveals the significance of rigorous mathematical analysis. The magnesium (Mg) element is one of metals in our universe with some interesting properties [1]. One significant feature of magnesium is the ability of it to produce much energy in combustion reaction. The considerable point in burning reaction of magnesium is that it does not produce carbon dioxide, a greenhouse gas. Therefore, it is clear that more attention to this metal can solve some fundamental problems of a society. If magnesium is produced through a clean route, such as using renewable energies, the process will be even "greener". In fact, because of many problems such as greenhouse effect resulting from fossil fuels and also, the decrease of these fuels in the future, a striking endeavor has started for producing clean energies from renewable resources [2,3,4,5,6,7]. In general, renewable energies are of different types and some of them are solar energy, wind energy, tidal energy, geothermal energy, and biomass energy, for example. Among various types of renewable energies, solar energy is an attractive subject for many researchers. It can be said that the human being noticed the importance of solar energy a little late but nonetheless, the subject of solar energy has seen a substantial amount of research projects in various fields [8,9,10,11,12,13,14,15,16,17]. It is obvious that the combination of magnesium production and utilizing renewable energies can be very beneficial for human being in this era. 
In parallel, the clean and cheap fuels have been always like dreams for all people and especially for scientists, engineers and even, governors. One of the useful and suitable fuels is the hydrogen gas [18]. Therefore, the combination of solar energy and hydrogen production is of fundamental importance for improving industrial processes in order to deal with many difficulties and problems in a society. One of the substances suitable for production of hydrogen is metallic magnesium. Hence, the production of this metal from magnesium oxide (MgO) can be an interesting goal for research and development. Because of high temperatures needed for cleavage of stiff bonds in the crystal of MgO, a carbon-based reducing agent like carbon itself or methane is usually necessary. Therefore, the high energy stored in the concentrated solar radiation and the chemical energy in the carbon-based material can cooperate to overcome the powerful structure of MgO and to release magnesium in the pure metallic form. The metal produced can create hydrogen gas readily through reaction with water vapor and the hydrogen gas released can be used in the fuel cells for production of electric power. It must be noted that the reaction of magnesium with water vapor is very fast and therefore, this reaction must be performed in a controlled manner and with considering all safety issues. Additionally, the usage of magnesium itself as a direct fuel has been suggested [19,20,21]. This novel idea is supported from two viewpoints: first, the heat of oxidation of magnesium is about ten times that of hydrogen and second, magnesium available in the seawater is enough for 300,000 years of humankind [19]. The direct usage of magnesium can be made in two modes, reaction with oxygen or reaction with water vapor. When metallic magnesium is mixed with oxygen or water vapor, it produces heat that can be used as a source of energy. The reaction of magnesium with water vapor has the benefit of producing hydrogen gas which can be used to provide energy and the magnesium oxide produced can then be converted back into magnesium using, for example, the solar laser and this forms a useful cycle [19]; the water vapor produced from reaction of hydrogen with oxygen in a fuel cell can be used in this cycle, too. All these studies as well as other fascinating properties of magnesium such as low weight (interesting for automobile and aerospace industries) and many others [22] prophesy about the important role that this metallic element will play in the future of our world. These facts reveal the high potentiality of such interesting fields and the importance and usefulness of research in them. Since chemical thermodynamics and chemical kinetics are the fundamental sciences in exploring chemical processes, it is a logical task to study the chemical reactions occurred in these fields thermodynamically and kinetically. One of the useful studies carried out on magnesium oxide is the work of Gálvez et al. [23] in which the role of MgO in a solar thermochemical cycle was investigated both thermodynamically and kinetically. Also, Rongti et al. [24] explored the reaction of MgO with graphite powder kinetically. The purpose of present paper is to perform a mathematical treatment of some significant but untreated points in the study of Gálvez et al. 
[23] and perhaps many others, more rigorously from standpoints of chemical kinetics (Chemistry) and optimization theory (Mathematics) coupled to each other in a large extent; this coupling, for first time, approaches the problem thoroughly with not relying only on curve-fitting quantities and therefore, it requires more consideration for understanding it. Theoretical treatment in solid state chemical kinetics (SSCK) The reactants in the solid state do not have the ability of moving through matter and therefore, the concentration concept is undefined for them. Even if a diffusion process controls the mechanism of reaction, this diffusion is slow in the solid state. For example, the surface of reaction can goes through body of matter while the reactants are in front and products in behind. As a result, it can be possible to substitute the concentration concept with fraction-of-reaction quantity [25]. The suggested mechanisms in SSCK are very diverse. The nucleation is a typical mechanism while another important one is the diffusion in one or several directions. Totally, there are more than 29 important equations in SSCK which come from different theories [26]. Irrespective of large number of equations presented, it is sometimes possible to fit satisfactorily the experimental data with more than one kinetic equation. Generally, finding correct kinetic equation and in other words the real mechanism had been one of the most difficult tasks in SSCK. From equations suggested, 18 mathematical equations derived from previous scientific researches and represented in Table A1 in Appendix A in Online Resource (Supplementary Material) are used more than others [25,26,27]. For simplicity, the abbreviations suggested by Sharp et al. [28] were used in Table A1 and throughout this study. Because of the nature of reactions in the solid state, the considerable amounts for fraction of reaction can be attained only at high temperatures and hence, the temperature can be considered as a powerful catalyst; this is rooted in the very high activation energies involved in SSCK, originated in turn from lack of collisions in the solid state. Mathematical modeling and raw kinetic data The inviolable science of Mathematics is the inseparable part of natural sciences and specially, the fundamental and strategic science of Chemistry named as Central Science [29]. Like many cases in chemical kinetics, the starting point is the respective differential equation: $$ \frac{da}{dt}= kf(a)\kern0.84em ,\kern0.72em a=a(t)=1-\frac{m(t)}{m(0)}=1-\frac{n(t)}{n(0)} $$ Before determination of quantities in Eq. (1), it should be said that according to the study of Gálvez et al. [23], in first step of a solar thermochemical cycle, MgO is reduced to Mg with assistance of carbonic reducing agent (charcoal or petcoke) and thermal energy from solar radiation and in second step, Mg is converted back to MgO through reaction with water vapor and the hydrogen gas is produced as the desired product in Ref. [23]; the focus of our study was on the first step of this solar thermochemical cycle (Eq. (10)). In Eq. (1), a(t) is the fraction of reduction reaction of MgO to Mg proceeded in time t and is dimensionless, k is the rate coefficient with dimension of time−1, f(a) is a mathematical function taken from Table A1 and without dimension, m(t) is the remaining mass of magnesium oxide in time t and n(t), the number of moles of remaining magnesium oxide in time t. 
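Because Eq. (1) ties a(t) directly to the measured mass signal, the conversion can be sketched in a few lines of code; the mass values below are invented placeholders rather than the experimental data of Ref. [23].

```python
# Minimal sketch: converting a TGA mass-loss trace m(t) into the fraction of reaction
# a(t) = 1 - m(t)/m(0) of Eq. (1). The numbers below are hypothetical placeholders.
import numpy as np

t_min = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 60.0])    # time / min (hypothetical)
m_mg  = np.array([100.0, 92.0, 85.0, 74.0, 60.0, 52.0])  # remaining MgO mass / mg (hypothetical)

a = 1.0 - m_mg / m_mg[0]   # dimensionless fraction of reaction, 0 <= a < 1

for ti, ai in zip(t_min, a):
    print(f"t = {ti:5.1f} min   a = {ai:.3f}")
```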
The numerical values of a have been obtained by TGA (thermogravimetric analysis) instrument (TG, Netzsch STA 409 CD) [23]; in fact, instead of solar radiation, Gálvez et al. [23] used the thermal energy produced by thermogravimetric instrument as the heat required for reaction. The basic principle of a thermogravimeter is the measurement of mass change over time as temperature changes (dynamic runs) or is fixed (isothermal runs). For kinetic studies, the isothermal runs are more favored and it is possible to guess kinetic parameters and mechanism from performance of a chemical reaction at different temperatures with temperature treated as a parameter (that is, a fixed temperature for an individual run but different relative to temperatures of other runs). The raw kinetic data used in our study were taken from isothermal runs of Ref. [23]. Usually, other instruments and techniques such as Fourier-transform infrared spectroscopy (FTIR), mass spectrometry, and gas chromatography (GC) are coupled to thermogravimetric operations for analytical purposes; in Ref. [23], GC method was used in combination with thermogravimetric operations. In general, Ref. [23] is a very comprehensive study with various experimental techniques used but only some parts of it is pertinent to the subject of our study. After rearranging the differential equation in Eq. (1) and taking integral from both sides, the function g(a) in Table A1 can be created: $$ \underset{0}{\overset{a}{\int }}\frac{da}{f(a)}=\underset{0}{\overset{t}{\int }} kdt\Rightarrow g(a)= kt $$ In Ref. [23], the D3 model was used for fitting of charcoal curves and the R3 model was utilized for petcoke curves; our paper, however, illustrates the determination of correct kinetic equations more systematically and more mathematically and with assistance of the behavior of experimental kinetic data. Calculation of rate coefficient (k) The rate coefficient k is one of the most important quantities in chemical kinetics and in most cases, including those in this study, has a fixed value by fixing temperature. What should be done is to obtain the rate coefficients at certain temperatures from correct kinetic equations in order for calculating the activation energies (another important quantity) and to compare these energies for selecting the most powerful reducing agent for reduction of metal oxide. The correct kinetic equation is unknown initially and the strategy is to calculate the rate coefficients for 18 equations and the detection of correct kinetic equation by some "mathematical filters" (introduced shortly) which are exerted on the calculated values of k. For calculation of k for any 1 of 18 equations in a fixed temperature, 4 methods can be considered in initial sight but only one optimum method must be selected. After selection of best method, 3 other methods are eliminated from discussions. Then, various mathematical tests, all relevant to selected method and introduced shortly, are presented for selection of best kinetic equation from among 18 mathematical equations; the selected rate coefficient comes from this best equation. It is important not to confuse things; in short, at first, only 1 method is selected from among 4 methods and then, all 18 equations are examined by various mathematical tests based on the selected method to give one selected equation and therefore, one selected rate coefficient. 
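Before the four methods are described, it is convenient to state Eq. (2) explicitly for the models just mentioned. The integral forms commonly tabulated in the SSCK literature for the Jander (D3), Ginstling–Brounshtein (D4) and contracting-sphere (R3) models are (quoted here from the general literature as an illustration, since Table A1 itself is given only in the Online Resource):
$$ g_{D3}(a)={\left[1-{\left(1-a\right)}^{1/3}\right]}^{2},\kern1em g_{D4}(a)=1-\frac{2a}{3}-{\left(1-a\right)}^{2/3},\kern1em g_{R3}(a)=1-{\left(1-a\right)}^{1/3} $$
so that, for whichever model is correct, g(a) plotted against t should give a straight line through the origin with slope k.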
The 4 methods are: To extract the mathematical functions a(t; k) from functions in Table A1 and to fit the experimental data of a versus t to the functions a(t; k) for finding the rate coefficient k. The same as method 1 except that in this method, t is in the role of dependent variable and a in the role of independent one. Since the purpose is to compare 18 equations together and the shift of variables is done in all 18 equations, this shift is of no problem from standpoint of mathematical logics. To apply Eq. (1). In this method, the curve of experimental data of a versus t is plotted and then, the quantity da/dt is calculated in every point of the curve. The calculation of da/dt is possible from two ways. First is to use the numerical algorithms for calculation of da/dt but this method can have appreciable error. Second is to fit as exact as possible the experimental data to a mathematical function compatible to the boundary conditions of experiment. It is not necessary that this function to be from those in Table A1 because this function is only an auxiliary one; in fact, the fundamental properties of this function must be the exactness in fitting and the compatibility to the boundary conditions of experiment. Considering a = 0 in t = 0 and a = 1 in t = ∞ and the particular shape of the experimental data of a versus t, the correct selection and the best choice is a function of the general form given in Eq. (3): $$ a(t)=1-\exp \left(-b{t}^c\right)\kern1.32em s.t.\kern1.2em b>0,c>0. $$ where s.t. denotes "subject to" or "such that". After determination of function a(t), or more precise a(t; b, c), the derivative of this function is calculated and by inserting the experimental data of t in it, the quantity da/dt is calculated in every point of the experimental curve. The quantity f(a) in these points can be calculated by means of experimental data of a and Table A1. Considering Eq. (1), the value of k can be calculated through two routes. In the first route, the curve of da/dt versus f(a) is plotted and the slope of linear fitting function gives the value of k. In the second route, the value of k is determined from Eq. (1) for each datum of a and the average of obtained values of k gives the final value for k. It turned out for our data that two routes give essentially identical value for k; the second route was adopted in this study. The same as method 3 except that in this method, Eq. (2) plays the role. Now, the advantage(s) and disadvantage(s) of these 4 methods are expressed in order to select only one optimal method and to throw away other three methods. About method 1, it should be said that it is not possible to extract a(t; k) for all functions in Table A1. Since the goal is to compare all 18 equations together, the method 1 is not an interesting one. Also, this method relies only on agreement between experimental data and fitting curve and uses only statistical quantities like RSQ (coefficient of determination or R-square or R2), RMSE (root mean squared error), and SSE (sum of squares of error) [30] for finding k. One of the permanent problems in chemical kinetics and especially in SSCK has been the satisfactory agreement between data and fitting curve while this fitting function in fact does not represent the real mechanism of reaction but the apparent agreement is reflected well in statistical quantities of fitting [30]. Additionally, the values of a are small and application of them in statistical quantities, which use them several times, introduces some error. 
For method 2, it should be said that this way does not have the fault of method 1 for extracting the function t(a; k) from Table A1. However, the other faults of method 1 is existent for method 2. About methods 3 and 4, it should be said that the method 4 is more exact because the mathematical form of function g(a) is definite and analytically obtained and extra steps of method 3 and unwanted resulted errors do not exist. As a result, the method 4 can be taken as the fundamental and principal method and various mathematical tests are derived from it (method 4) for examination of 18 equations; these tests determine the correct kinetic equation and therefore, the correct rate coefficient in an especial fixed temperature. Appendix B in Online Resource describes the usage of method 4 more mathematically and rigorously, but for those uninterested in mathematical details, Appendix B* in Online Resource is essentially the same as Appendix B but without mathematical equations; only the results are mentioned in paper. It is clear that the more powerful and flexible the mathematical programming used, the more precise the huge amounts of calculations and because of this key point, the calculations in this study were performed by several mathematical programs written in the powerful and fast languages of MATLAB and C and combination of them known as MEX Functions. Calculation of activation energy (E a) After detecting the correct kinetic equation and finding the corresponding rate coefficient, it should be noted that all actions performed (mentioned more rigorously in Appendix B in Online Resource) are for reaction at one certain temperature. Therefore, all those long steps must be accomplished also for other temperatures. After obtaining the values of k for all temperatures of reaction, the activation energy of reaction can be calculated. In some studies, the well-known equation of Arrhenius has been used [23, 31,32,33]: $$ k(T)=A\exp \left(-{E}_a/ RT\right) $$ T, A, and R are thermodynamic temperature, pre-exponential factor and gases constant, respectively. In general, the calculation of activation energy by non-linear regression from non-linear Eq. (4) is more exact relative to linear regression of logarithmic form of Eq. (4); the reason has to do with the statistical weights and residuals of data and is a detailed statistical discussion. The Arrhenius equation is a form of Hood equation [34]: $$ k(T)=A\exp \left(-B/T\right) $$ where B is a constant. The questionable point in Eq. (4) is the independency of activation energy from temperature. Since on one hand, the reactions in SSCK are particular reactions and occurred at high temperatures and on the other hand, it is completely possible to obtain the values of k at different temperatures from different kinetic equations (because of different mechanisms), it is a logical action to take into account the possibility of change of activation energy with temperature. Thus, the original form of Arrhenius equation can be a more exact definition of activation energy [35]: $$ {E}_a(T)={RT}^2\frac{d\ln k}{dT} $$ Using Eq. (6) analytically is more exact relative to numerical usage of it and this requires the function k(T) to be introduced. The function k(T) can be found by curve fitting of obtained values of k at different temperatures but the general mathematical form of function k(T) should be selected carefully. One of the proposed forms is as [34,35,36]. 
$$ k(T)={AT}^s\exp \left(-{E}_a/ RT\right) $$ in which, s is usually a small positive integer related to statistical thermodynamic discussions [35]. Equation (7) is a generalized form of Arrhenius equation and acts more exact in some cases. It will be more exact to take Ea/R in Eq. (7) as a constant like C and to utilize Eq. (6) to obtain activation energy as Ea = CR + sRT. Considering these points, the perfect mathematical form of Eq. (8) can be a suitable choice and was utilized in this study: $$ k(T)={AT}^B\exp \left(-C/{T}^D\right)\kern1.2em s.t.\kern1.2em \mathrm{A}>0,\mathrm{B}\ge 0,\mathrm{C}>0,\mathrm{D}>0. $$ Since Eq. (8) contains 4 parameters, the number of data points (temperatures) needed for determination of 4 parameters from fitting should be at least 4 and therefore, for cases like molar ratio 1:2 with only 3 data points (introduced shortly), D was taken as 1. Combining Eq. (6) and Eq. (8), the corresponding activation energy can be calculated as $$ {E}_a(T)= RT\left(B+\frac{CD}{T^D}\right) $$ Reactions of MgO with carbonic reductants The general form of reaction of MgO with carbonic reductants at high temperatures can be shown as $$ Mg O(s)+C(s)\to Mg(g)+ CO(g) $$ Molar ratio of MgO to reductant in all reactions performed was 1:1 or 1:2 [23]. The trend of calculations is described in details for one reaction in a certain molar ratio and only the results are presented for other cases. Reaction of MgO with charcoal in molar ratio 1:1 at T = 1723.15 K The 18 mathematical tests exerted on 18 different kinetic equations in order to clarify the correct one can be collected in a table in which, the rows represent 18 tests and the columns are abbreviations for kinetic equations selected by first 15 tests. Table 1 is an illustration and the quantities with number 1 are for curves of g(a) versus t, the quantities with number 2 for curves of k versus t and the quantities with number 3 for curves of t versus a. Table 1 The results of applying different tests to the different kinetic equations for determination of average k for reaction of MgO with charcoal in molar ratio 1:1 and at temperature 1723.15 K As can be seen in Table 1, from first 15 tests, only 3 tests confirm a mechanism other than D4. Considering the power of tests, it is obvious that the most logical selection for correct kinetic equation is D4. Therefore, the rate coefficient is taken from equation D4 and two other equations (D3 and F3) are deleted for this case. The values of k for equations D3, D4 and F3 as well as their optimized values are presented in Table 2. It is useful to mention this point that F mechanisms have been borrowed from other branches of chemical kinetics in which, contrary to SSCK, the concepts of concentration and collision are meaningful. Therefore, F mechanisms are in a lower grade compared to other mechanisms of SSCK. The defeat in 13 tests and the better fitting by respective t(a; k) reflected in RSQ (3), RMSE (3), and SSE (3) in Table 1 for F3 equation is in fact a clear example of wrong conclusions based on only the statistical quantities of fitting like RSQ; as can be seen in Table 2, the value of k for F3 model is about two orders of magnitude larger than that of D4 model and this affects the calculation of activation energy significantly. In fact, the erroneous calculation of activation energy can be very detrimental with respect to some critical decisions in industry such as identification of the bottleneck of process. 
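To illustrate how method 4 and the idea of "mathematical filters" work in practice, the sketch below computes, for a few candidate models, the pointwise rate coefficients k_i = g(a_i)/t_i, their mean and relative spread, and the linearity of g(a) versus t through the origin; a model that merely fits a(t) well but gives a strongly drifting k_i fails such filters. The (t, a) data, the candidate-model list and the two simple filter statistics used here are illustrative simplifications of the 18 tests described in the text, not a reproduction of them.

```python
# Minimal sketch of "method 4": for each candidate integral model g(a), compute the
# pointwise rate coefficients k_i = g(a_i)/t_i, then judge the model by the constancy
# of k_i (relative standard deviation) and by the linearity of g(a) vs t through the
# origin -- rather than by how well a(t; k) happens to fit the data.
import numpy as np

def g_D3(a): return (1.0 - (1.0 - a) ** (1.0 / 3.0)) ** 2            # Jander
def g_D4(a): return 1.0 - 2.0 * a / 3.0 - (1.0 - a) ** (2.0 / 3.0)   # Ginstling-Brounshtein
def g_F1(a): return -np.log(1.0 - a)                                  # first order

models = {"D3": g_D3, "D4": g_D4, "F1": g_F1}

# Hypothetical isothermal (t / min, a) data, not the experimental data of this study.
t = np.array([10.0, 20.0, 40.0, 60.0, 90.0, 120.0])
a = np.array([0.05, 0.09, 0.17, 0.24, 0.33, 0.41])

for name, g in models.items():
    ga = g(a)
    k_point = ga / t                                   # pointwise rate coefficients, min^-1
    k_mean = k_point.mean()
    rsd = 100.0 * k_point.std(ddof=1) / k_mean         # filter 1: constancy of k_i
    slope = (t @ ga) / (t @ t)                         # zero-intercept least-squares slope
    resid = ga - slope * t
    r2 = 1.0 - (resid @ resid) / ((ga - ga.mean()) @ (ga - ga.mean()))  # filter 2: linearity
    print(f"{name}: mean k = {k_mean:.2e} min^-1, RSD(k_i) = {rsd:5.1f} %, R^2(g vs t) = {r2:.4f}")
```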
Figure 1a shows the experimental data together with the finally selected equations for charcoal in molar ratio 1:1 at different temperatures; except for 1823.15 K, all temperatures follow the D4 equation. Figure 1b shows the same experimental data fitted by Eq. (3); this equation lacks any mechanistic basis and emphasizes fitting only, and hence cannot be used for the calculation of activation energy. Appendix C in the Online Resource describes an interesting use of Eq. (3) for estimating the time of 90% of reaction. Table 3 reports the abbreviations of the selected kinetic equations and the corresponding optimized values of k for the reactions of MgO with carbonic reductants in different molar ratios at different temperatures. In the study of Gálvez et al. [23], the D3 equation was used for all charcoal cases and the R3 equation for all petcoke cases; RSQ and RMSE were the main criteria for selecting these models in Ref. [23].

Table 2 The values of average k obtained from equations D3, D4, and F3 and the corresponding optimized values for reaction of MgO with charcoal in molar ratio 1:1 and at temperature 1723.15 K

Fig. 1 a The experimental data [23] as well as the final selected equations for charcoal in molar ratio 1:1 at different temperatures. b The experimental data [23] fitted by Eq. (3) for charcoal in molar ratio 1:1 at different temperatures

Table 3 The abbreviations of the selected kinetic equations and the corresponding optimized values of k/min−1 for reactions of MgO with carbonic reductants in different molar ratios and at different temperatures

In general, the unsatisfactory fitting by some of the selected kinetic equations results from the insufficient development of theory in SSCK, and this problem is considered the largest challenge in the field. Indeed, among the various branches of chemical kinetics, SSCK contains the greatest amount of ambiguity and obscurity, and elucidating its mysteries requires more attention, time and cost than are currently devoted to it. Reaction mechanisms have always been a very challenging subject in chemistry, and the difficulty is even more severe in the solid state. As Table 3 shows, except for charcoal in molar ratio 1:1 at 1823.15 K, which obeys the D3 mechanism, all other cases follow the D4 mechanism. Altogether, it is inferred that diffusion is the rate-determining step in the processes studied here. Since the boiling temperature of Mg metal is about 1363 K and the temperatures studied in this paper are higher, the magnesium produced in Eq. (10) is gaseous; the products therefore leave the reaction zone, and the rate-determining diffusion step probably concerns only the reactants. A significant point is that the melting point of a solid is inversely related to diffusion [37]; the high melting points of the reactants in Eq. (10) are thus compatible with the conclusion that diffusion is rate determining. Since D4 is an extension of the D3 model, the numerical values of the rate coefficients for D3 and D4 under identical conditions are not very different, a fact observed frequently in this study; Table 2 contains one instance. It can therefore be said that the D4 equation can be taken as the fundamental kinetic model for describing the kinetic behavior of the reaction of MgO with charcoal and with petcoke, each in molar ratios 1:1 and 1:2 and at different temperatures.
The derivation of the mathematical equation for the D4 mechanism from Fick's first law is discussed in detail in Ref. [27]. A subtle and significant point in Table 3 is the very small values of the rate coefficients for reactions in the solid state, and especially for the reaction of MgO with carbonic materials according to Eq. (10), in spite of the very high temperatures involved. If Eq. (10) is part of a chemical process in a facility, it will therefore be a limiting step of the entire process. This limiting character is highlighted by comparison with reactions in solution; for instance, the reaction of C2H5I with OH− in ethanol has a rate coefficient of 0.119 L mol−1 s−1 at 363.8 K [35], where L denotes liter. The limiting nature of reactions in the solid state is reflected in high values of the activation energy, discussed shortly.

Activation energies at different conditions

The calculation of activation energies requires a good fit of the obtained values of k. Figure 2 shows the final values of k from Table 3 together with the curves of k(T) obtained from fitting by Eq. (8); in the figures of this study, C11 denotes charcoal 1:1, C12 charcoal 1:2, P11 petcoke 1:1 and P12 petcoke 1:2. The parameters obtained from the fitting and the relevant statistical quantities of the fit [30] are reported in Table 4. An interesting point in Table 4 is that the values of parameter D for charcoal 1:1 and petcoke 1:1 are practically 1; as pointed out below Eq. (8), because five values of k are available at five temperatures for molar ratio 1:1 for both charcoal and petcoke, D can be treated as a fourth fitting parameter in the 1:1 cases (unlike molar ratio 1:2, where D was set to 1 from the beginning). Because of the low number of points for molar ratio 1:2, the corresponding fits in Fig. 2 and the resulting activation energies are not very reliable; in fact, because the number of parameters equals the number of experimental data points for molar ratio 1:2, the RMSE is infinite, as can be seen in Table 4 and as can be verified from Eq. (B7) in Appendix B in the Online Resource. In general, the larger the number of experimental data, the more precise the calculations.

Fig. 2 The final values of k as well as curves of k(T) for reactions of MgO with carbonic reductants in different molar ratios and at temperatures 1723.15 K, 1748.15 K, 1773.15 K, 1798.15 K, and 1823.15 K

Table 4 The parameters obtained from fitting of final values of k by Eq. (8) and the relevant statistical quantities of fitting [30] for reactions of MgO with carbonic reductants in different molar ratios and at temperatures 1723.15 K, 1748.15 K, 1773.15 K, 1798.15 K, and 1823.15 K

The values of Ea at different temperatures, the SD, RSD and range of Ea, the mean value of Ea, Ea obtained from non-linear regression of the Arrhenius equation, Ea obtained from linear regression of the Arrhenius equation (using the logarithmic linear form, as in Ref. [23]) and Ea from Ref. [23] are all collected together in Table 5. The reason the Ea values are equal at all temperatures for each molar ratio 1:2 case is the essentially zero value of B and the exact value of 1 for D which, according to Eq. (9), produce this property.

Table 5 The comparison of values of Ea/(kJ mol−1) obtained from different methods as well as the relevant statistical quantities for reactions of MgO with carbonic reductants in different molar ratios and at temperatures 1723.15 K, 1748.15 K, 1773.15 K, 1798.15 K, and 1823.15 K
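The following MATLAB sketch shows one way to fit Eq. (8) under its constraints and to evaluate Eq. (9) afterwards; the k values are placeholders rather than those of Table 3, lsqcurvefit needs the Optimization Toolbox, and for a 1:2 case with only three points the exponent D would simply be fixed at 1, as described above.

```matlab
% Placeholder rate coefficients at the five reaction temperatures (min^-1).
T = [1723.15 1748.15 1773.15 1798.15 1823.15];       % K
k = [2.1e-4 3.0e-4 4.4e-4 6.2e-4 8.9e-4];            % hypothetical values
R = 8.314462618;                                      % J mol^-1 K^-1

% Eq. (8): k(T) = A*T^B*exp(-C/T^D), with A>0, B>=0, C>0, D>0.
kfun = @(p, T) p(1).*T.^p(2).*exp(-p(3)./T.^p(4));    % p = [A B C D]
p0   = [1e-3 0 2e4 1];                                % crude starting guess
lb   = [eps 0 eps eps];
ub   = [Inf Inf Inf Inf];
p    = lsqcurvefit(kfun, p0, T, k, lb, ub);

% Eq. (9): Ea(T) = R*T*(B + C*D/T^D), evaluated at each experimental temperature.
Ea = R.*T.*(p(2) + p(3)*p(4)./T.^p(4));               % J mol^-1
fprintf('Ea(T) = %s kJ/mol\n', mat2str(Ea/1e3, 4));
fprintf('mean Ea = %.1f kJ/mol, RSD = %.2f %%\n', mean(Ea)/1e3, 100*std(Ea)/mean(Ea));
```

The mean, SD and RSD computed in the last line correspond to the statistical quantities reported for Ea in Table 5.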
In Ref. [23], Ea was obtained only for the 1:1 molar ratio of both reducing agents, and the 1:2 molar ratio was not treated in this regard. According to Table 5, and contrary to Ref. [23], Ea for the 1:1 molar ratio is larger for petcoke than for charcoal; the quantitative validation of the method using experimental data, introduced shortly, will clarify that our treatment gives more realistic values of Ea than the treatment of Ref. [23]. It is also noteworthy that in Table 5 the values of Ea for an individual carbonic reductant differ between molar ratios; more is said about this difference in Appendix D in the Online Resource. In the study of Rongti et al. [24], which is only partly similar to our work, the reaction between MgO and graphite powders was examined in a dynamic (non-isothermal) manner in the temperature range 293 to 1973 K, and the average activation energy was found to be 208.29 kJ mol−1 for 0.1 < a < 0.25 and 374.13 kJ mol−1 for 0.25 < a < 0.5; these values were obtained for molar ratio 1:2 (MgO:C). They also calculated the activation energy using transition state theory (TST) as 470 kJ mol−1 and proposed a chemical mechanism, but did not discuss kinetic equations. In a study by Shu et al. [38], the isothermal reduction of scheelite (CaWO4) by silicon (Si) from 1423 to 1523 K was treated from a kinetic viewpoint, and it turned out that both the D3 and D4 models describe the kinetics well; the Arrhenius equation (in logarithmic linear form) was used for the calculation of Ea, and values of 379.93 and 387.16 kJ mol−1 were obtained for the D3 and D4 models, respectively. One point clearly visible in Table 5 is the high values of the activation energy for reactions in the solid state relative to those in other phases; for example, the reaction of C2H5I with OH− in ethanol has Ea equal to 90.3744 kJ mol−1, obtained from the Arrhenius equation in logarithmic linear form [35]. In fact, the large values of Ea in Table 5 are rooted in the small values of k in Table 3. Consequently, the reaction of MgO with carbonic materials (Eq. (10)) can be considered the limiting step of a process.

Quantitative validation of method using experimental data

Because of the significance of the activation energy in the design of chemical processes in industry and in the suitable treatment of slow reactions (chemical bottlenecks), the comparison of Ea values is of critical importance. One of the important results obtained from Fig. 2 and Table 5 is that, contrary to Ref. [23], the average Ea in both molar ratios is larger for petcoke than for charcoal. In this section, the quantitative validation of the method adopted in the present paper is introduced to show the validity of the conclusions about the activation energy; before that, it is appropriate to mention a related subtle point regarding the change of the rate coefficient with temperature. In general, the quantity dk/dT is a good measure of the sensitivity of a reaction to a temperature change. Rewriting Eq. (6) as Eq. (11),

$$ \frac{dk}{dT}=\frac{kE_a(T)}{RT^2} \qquad (11) $$

it is obvious that both k and Ea appear as effective factors; usually, however, it is the quantity dlnk/dT that is used in this regard. The point is that dlnk = dk/k is in fact the limit of Δk/k as ΔT → 0, and the fraction Δk/k is a more suitable quantity than Δk for investigating sensitivity. Therefore, Eq. (6) is more appropriate than Eq. (11) for evaluating the sensitivity to temperature.
Rearranging Eq. (6) as Eq. (12),

$$ \frac{d\ln k}{dT}=\frac{E_a(T)}{RT^2} \qquad (12) $$

it is obvious that k does not appear as a contributing factor on the right-hand side of Eq. (12). According to Eq. (12), therefore, the larger the activation energy of a reaction, the higher its sensitivity to temperature; this sensitivity approaches zero as T → ∞ and approaches infinity as T → 0. Considering Eq. (12) and Table 5, petcoke with molar ratio 1:1 has the greatest sensitivity to temperature. Figure 3a, b, which are practically identical but drawn in different styles, confirms this conclusion. If more data points were available, and equally many for all cases, the sensitivity of petcoke to temperature would stand out even more. In Fig. 2 it is also evident that the numerical values of k, and therefore the corresponding fitted curve, for petcoke 1:1 are lower than those of charcoal 1:1 at not very high temperatures but are higher at very high temperatures; the same holds for the numerical values of k for petcoke 1:2 relative to charcoal 1:2. The reason this trend is not seen in the fitted curve of petcoke 1:2 relative to charcoal 1:2 is the low number of data points and the high curvature required of the fitting function for petcoke 1:2, which prevent the fitting method from giving a good fit.

Fig. 3 a The experimental data [23] compared with each other for reactions of MgO with carbonic reductants in different molar ratios and at different temperatures. b Another style of plot for the data in a

More quantitatively, the areas under the experimental data in Fig. 3a, b can in principle be calculated numerically and compared; this comparison would be more accurate if the initial and final times were the same for all experimental runs. Nevertheless, for an approximate exploration, these areas were calculated for charcoal 1:1 and petcoke 1:1 over the temperature interval T = 1773.15 K to T = 1823.15 K, with the initial point taken at 40 min and the final point at 348 min; these choices give relatively good coverage of the experimental data, as can be seen in Fig. 3a, b. Figure 4a shows this quantitative comparison of AUC, the area under the curve, obtained by numerical integration using a program written in MATLAB; in accordance with Eq. (12), and following the "overtaking" behavior observed in Figs. 2 and 3a, b, petcoke 1:1 is more sensitive to temperature change, and this sensitivity is especially large at lower temperatures. From Fig. 3a, b it can also be said that the temperature T = 1773.15 K is a "jumping temperature" for both charcoal 1:1 and petcoke 1:1 that "excites" the reactants toward chemical reaction; this does not mean that the reaction must be performed at lower temperatures; the point is simply that the lower the temperature, the greater the sensitivity to a temperature change. Figure 4b shows this behavior better by plotting

$$ \frac{\Delta AUC}{AUC_1}=\frac{AUC_2-AUC_1}{AUC_1} $$

where AUC1 and AUC2 are the areas corresponding to the first and second temperatures of each subinterval in Fig. 4b, respectively; for example, for the first temperature interval in Fig. 4b, ΔAUC = AUC(T = 1798.15 K) − AUC(T = 1773.15 K).

Fig. 4 a The comparison of the quantity AUC of experimental data for reactions of MgO with charcoal and petcoke in molar ratio 1:1 and at different temperatures. b The comparison of the quantity ΔAUC/AUC1 of experimental data for reactions of MgO with charcoal and petcoke in molar ratio 1:1 and at different but isolength temperature subintervals
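A short MATLAB fragment of the kind used for Fig. 4 is sketched below; the conversion-time arrays are placeholders for the digitized experimental curves, and only the numerical-integration step (trapezoidal rule over the common 40-348 min window) reflects the procedure described above.

```matlab
% Placeholder a(t) curves on a common time grid (min) for two temperatures.
t  = 40:4:348;                                    % common integration window
a1 = 0.5*(1 - exp(-t/300));                       % hypothetical curve at the lower T
a2 = 0.7*(1 - exp(-t/250));                       % hypothetical curve at the higher T

AUC1 = trapz(t, a1);                              % area under the curve at the lower T
AUC2 = trapz(t, a2);                              % area under the curve at the higher T
rel  = (AUC2 - AUC1)/AUC1;                        % Delta AUC / AUC1, as plotted in Fig. 4b

fprintf('AUC1 = %.1f, AUC2 = %.1f, dAUC/AUC1 = %.3f\n', AUC1, AUC2, rel);
```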
The elapse of sufficient time is also an important factor in achieving the desired result from a reaction. Overall, therefore, from an industrial standpoint and within a reasonable reaction time, charcoal with an excess molar ratio is the recommended reducing agent at not very high temperatures. At very high temperatures, petcoke with an excess molar ratio is a suitable choice, and perhaps even more suitable than charcoal with an excess molar ratio, especially if a long reaction time can be provided. It must be noted that the difference in behavior of the carbonic reducing agents reported in this study is itself a detailed scientific topic, related to the different microstructural properties of the different forms of carbon [39, 40]. In the end, it can be said that, as in many other solid-state reactions, diffusion is the dominant process controlling the rate of the reactions in this study, particularly because the reactants have high melting points. It also turned out that the D4 kinetic equation is a powerful mathematical model for the kinetic treatment of reactions in the solid state, including the reactions of this study; Appendices E and F in the Online Resource discuss the D4 model more thoroughly, and Refs. [41,42,43,44,45,46,47,48,49,50,51,52,53,54] relate to Appendix F, which addresses the usefulness of the D4 equation in analyzing various reactions in chemistry.

Conclusions

In this study, a new and detailed combination of mathematical modeling and discrete numerical optimization theory was used, in the form of 18 mathematical filters that do not rely only on the statistical quantities of fitting. The purpose of these filters was to determine the correct kinetic equation, and therefore the corresponding rate coefficient, from among the 18 equations most used at present in solid state chemical kinetics. The obtained values of the rate coefficient were then fitted by a new fundamental mathematical function. Using the general definition of the activation energy and the fitted function for the rate coefficients, the activation energy was introduced as a function of temperature. Comparison with the previous study made it clear that, contrary to it, the values of average Ea for petcoke are larger than those for charcoal in both molar ratios and, in addition, that the values of Ea for an individual carbonic reductant differ between molar ratios. The areas under the experimental data were also calculated and used for method validation and comparison. In all, with the help of the experimental data and the mathematical discussion, it became clear that for the reduction of magnesium oxide to metallic magnesium within a reasonable reaction time, charcoal with an excess molar ratio is industrially the most beneficial choice at not very high temperatures, whereas at very high temperatures petcoke with an excess molar ratio is also suitable, and perhaps even more suitable than charcoal with an excess molar ratio, especially if a long reaction time can be provided. As in many other solid-state reactions, diffusion is the dominant process controlling the rate of the reactions in this study, particularly because the reactants have high melting points. Furthermore, the D4 kinetic equation proved to be a powerful mathematical model for the kinetic treatment of reactions in the solid state, including the reactions of this study.
Abbreviations

SSCK: Solid State Chemical Kinetics
TGA: Thermogravimetric Analysis
FTIR: Fourier-transform Infrared Spectroscopy
RSQ (or R-square or R2): Coefficient of Determination
RMSE: Root Mean Squared Error
SSE: Sum of Squares of Error
RSD: Relative Standard Deviation
MATLAB: Matrix Laboratory
D4: Ginstling-Brounshtein Diffusion Model (the complete list, including D3, R3 and others, is collected in Table A1 in Appendix 1 in Online Resource)
C11: Charcoal in molar ratio 1:1
Petcoke: Petroleum Coke
P11: Petcoke in molar ratio 1:1
NL_Arr: Non-linear Arrhenius
L_Arr: Linear Arrhenius
AUC: Area Under Curve

References

House JE (2013) Inorganic chemistry, second edn. Academic Press, Waltham, USA Kokabian B, Ghimire U, Gude VG (2018) Water deionization with renewable energy production in microalgae - microbial desalination process. Renew Energy 122:354–361. https://doi.org/10.1016/j.renene.2018.01.061 Nikparsa P, Rauch R, Mirzaei AA (2017) A hybrid of winddiesel technology with biomass-based Fischer–Tropsch synthesis. Monatshefte für Chemie - Chemical Monthly 148(10):1877–1886. https://doi.org/10.1007/s00706-017-1998-5 Astaneh M, Roshandel R, Dufo-López R, Bernal-Agustín JL (2018) A novel framework for optimization of size and control strategy of lithium-ion battery based off-grid renewable energy systems. Energy Convers Manag 175:99–111. https://doi.org/10.1016/j.enconman.2018.08.107 Shin H, Ellinger AE, Nolan HH, DeCoster TD, Lane F (2016) An assessment of the association between renewable energy utilization and firm financial performance. J Bus Ethics 151(4):1121–1138. https://doi.org/10.1007/s10551-016-3249-9 Zhang S, Andrews-Speed P, Li S (2018) To what extent will Chinas ongoing electricity market reforms assist the integration of renewable energy? Energy Policy 114:165–172. https://doi.org/10.1016/j.enpol.2017.12.002 Yazdani M, Chatterjee P, Zavadskas EK, Streimikiene D (2018) A novel integrated decision-making approach for the evaluation and selection of renewable energy technologies. Clean Techn Environ Policy 20(2):403–420. https://doi.org/10.1007/s10098-018-1488-4 Lojpur V, Krstić J, Kačarević-Popović Z, Filipović N, Validžić IL (2018) Flexible and high-efficiency Sb2S3/solid carrier solar cell at low light intensity. Environ Chem Lett 16(2):659–664. https://doi.org/10.1007/s10311-017-0702-7 Henry J, Daniel T, Balasubramanian V, Mohanraj K, Sivakumar G (2020) Enhanced photosensitivity of Bi-doped Cu2Se thin films prepared by chemical synthesis for solar cell application. Iranian Journal of Science and Technology, Transactions A: Science 44(5):1369–1377. https://doi.org/10.1007/s40995-020-00949-6 Hameed AMA (2018) Efficient synthesis of pyrano[2,3-b]pyridine derivatives using microwave or solar energy. Environ Chem Lett 16(4):1423–1427. https://doi.org/10.1007/s10311-018-0744-5 Akpinar EK (2019) The effects of some exergetic indicators on the performance of thin layer drying process of long green pepper in a solar dryer. Heat Mass Transf 55(2):299–308. https://doi.org/10.1007/s00231-018-2415-2 Yousaf Hameed K, Faisal B, Hanae T, Bernabé Marí S, Saira B, Naveed Ali Kaim K (2019) Modelling of novel-structured copper barium tin sulphide thin film solar cells. Bull Mater Sci 42(5):231. https://doi.org/10.1007/s12034-019-1919-9 Najafi-Ghalelou A, Nojavan S, Zare K (2018) Information gap decision theory-based risk-constrained scheduling of smart home energy consumption in the presence of solar thermal storage system. Sol Energy 163:271–287.
https://doi.org/10.1016/j.solener.2018.02.013 Gimpel T, Winter S, Boßmeyer M, Schade W (2018) Quantum efficiency of femtosecond-laser sulfur hyperdoped silicon solar cells after different annealing regimes. Sol Energy Mater Sol Cells 180:168–172. https://doi.org/10.1016/j.solmat.2018.03.001 Matin MA, Rhaman MM, Hakim MA, Islam MF (2020) Bi(1−y)SmyFeO3 as prospective photovoltaic materials. Bull Mater Sci 43(1):167. https://doi.org/10.1007/s12034-020-02118-2 Lashgari M, Zeinalkhani P (2018) Ammonia photosynthesis under ambient conditions using an efficient nanostructured FeS2 /CNT solar-energy-material with water feedstock and nitrogen gas. Nano Energy 48:361–368. https://doi.org/10.1016/j.nanoen.2018.03.079 Dorouzi M, Mortezapour H, Akhavan H-R, Moghaddam AG (2018) Tomato slices drying in a liquid desiccant-assisted solar dryer coupled with a photovoltaic-thermal regeneration system. Sol Energy 162:364–371. https://doi.org/10.1016/j.solener.2018.01.025 Cornaglia LM, Lombardo EA (2018) Pure hydrogen production for low temperature fuel cells. Catal Lett 148(4):1015–1026. https://doi.org/10.1007/s10562-018-2309-4 White-hot energy - Magnesium power - The Economist. https://www.economist.com/technology-quarterly/2010/04/19/white-hot-energy. Accessed 8 October 2018 Hahn R, Mainert J, Glaw F, Lang K-D (2015) Sea water magnesium fuel cell power supply. J Power Sources 288:26–35. https://doi.org/10.1016/j.jpowsour.2015.04.119 Hasvold Ø, Lian T, Haakaas E, Størkersen N, Perelman O, Cordier S (2004) CLIPPER: a long-range, autonomous underwater vehicle using magnesium fuel and oxygen from the sea. J Power Sources 136(2):232–239. https://doi.org/10.1016/j.jpowsour.2004.03.023 Mordike B, Ebert T (2001) Magnesium: Properties — applications — potential. Mater Sci Eng A 302(1):37–45. https://doi.org/10.1016/S0921-5093(00)01351-4 Gálvez M, Frei A, Albisetti G et al (2008) Solar hydrogen production via a two-step thermochemical process based on MgO/Mg redox reactions—Thermodynamic and kinetic analyses. Int J Hydrog Energy 33(12):2880–2890. https://doi.org/10.1016/j.ijhydene.2008.04.007 Rongti L, Wei P, Sano M (2003) Kinetics and mechanism of carbothermic reduction of magnesia. Metall Mater Trans B 34(4):433–437. https://doi.org/10.1007/s11663-003-0069-y Gaisford S, Kett V, Haines P (2016) Principles of thermal analysis and calorimetry, second edn. Royal Society of Chemistry, Cambridge Dickinson C, Heal G (1999) Solid–liquid diffusion controlled rate equations. Thermochim Acta 340-341:89–103. https://doi.org/10.1016/S0040-6031(99)00256-7 Khawam A, Flanagan DR (2006) Solid-state kinetic models: basics and mathematical fundamentals. J Phys Chem B 110(35):17315–17328. https://doi.org/10.1021/jp062746a Sharp JH, Brindley GW, Achar BNN (1966) Numerical data for some commonly used solid state reaction equations. J Am Ceram Soc 49(7):379–382. https://doi.org/10.1111/j.1151-2916.1966.tb13289.x Balaban AT, Klein DJ (2006) Is chemistry the central science? How are different sciences related? Co-citations, reductionism, emergence, and posets. Scientometrics 69(3):615–637. https://doi.org/10.1007/s11192-006-0173-2 Curve Fitting Toolbox User's Guide - MATLAB - MathWorks. http://www.mathworks.com/help/pdf_doc/curvefit/curvefit.pdf. Accessed 5 March 2018 Czyrski A, Hermann T, Smoląg A (2011) Optimization of a new isoquinoline derivative preparation. React Kinet Mech Catal 104(1):173–180. 
https://doi.org/10.1007/s11144-011-0331-2 Tétényi P, Tellinger O (2010) Interaction affinity of nickel promoted molybdena alumina with C, H and S in some catalytic conversions. React Kinet Mech Catal 99:99–109. https://doi.org/10.1007/s11144-009-0104-3 Saheb V (2013) Theoretical studies on the kinetics and mechanism of the reaction of atomic hydrogen with carbon dioxide. Kinet Catal 54(6):671–676. https://doi.org/10.1134/S0023158413060104 Upadhyay SK (2006) Chemical kinetics and reaction dynamics. Springer, Berlin Levine IN (2009) Physical chemistry, McGraw-Hill Higher Education, New York House JE (2007) Principles of chemical kinetics,. Academic Press, Burlington, USA Tiwari GP, Mehrotra RS (2008) Diffusion and melting. Defect and Diffusion Forum 279:23–37 Shu Q, Wu J, Chou K (2015) Kinetics Study on Reduction of CaWO4 by Si from 1423 K to 1523 K. High Temperature Materials and Processes 34(8):805–811. https://doi.org/10.1515/htmp-2014-0161 Gálvez ME, Hischier I, Frei A, Steinfeld A (2008) Ammonia production via a two-step Al2O3/AlN thermochemical cycle. 3. Influence of the Carbon Reducing Agent and Cyclability. Ind Eng Chem Res 47(7):2231–2237. https://doi.org/10.1021/ie071244w Gálvez M, Lázaro M, Moliner R (2005) Novel activated carbon-based catalyst for the selective catalytic reduction of nitrogen oxide. Catal Today 102-103:142–147. https://doi.org/10.1016/j.cattod.2005.02.020 Alizadehhesari K, Golding SD, Bhatia SK (2012) Kinetics of the Dehydroxylation of Serpentine. Energy Fuel 26(2):783–790. https://doi.org/10.1021/ef201360b Khaki JV, Shalchian H, Rafsanjani-Abbasi A, Alavifard N (2018) Recovery of iron from a high-sulfur and low-grade iron ore. Thermochim Acta 662:47–54. https://doi.org/10.1016/j.tca.2018.02.010 Yorulmaz SY, Atimtay AT (2009) Investigation of combustion kinetics of treated and untreated waste wood samples with thermogravimetric analysis. Fuel Process Technol 90(7-8):939–946. https://doi.org/10.1016/j.fuproc.2009.02.010 Lepage D, Sobh F, Kuss C, Liang G, Schougaard S (2014) Delithiation kinetics study of carbon coated and carbon free LiFePO4. J Power Sources 256:61–65. https://doi.org/10.1016/j.jpowsour.2013.12.054 Zhang J-L, Xing X-D, Cao M-M, Jiao K-X, Wang C-L, Ren S (2013) Reduction kinetics of vanadium titano-magnetite carbon composite pellets adding catalysts under high temperature. J Iron Steel Res Int 20(2):1–7. https://doi.org/10.1016/S1006-706X(13)60048-5 Bai Y, Wu C, Wu F, Yang J-H, Zhao L-L, Long F, Yi B-L (2012) Thermal decomposition kinetics of light-weight composite NaNH2–NaBH4 hydrogen storage materials for fuel cells. Int J Hydrog Energy 37(17):12973–12979. https://doi.org/10.1016/j.ijhydene.2012.05.069 Naktiyok J, Bayrakçeken H, Özer AK, Gülaboğlu MŞ (2017) Investigation of combustion kinetics of Umutbaca-lignite by thermal analysis technique. J Therm Anal Calorim 129(1):531–539. https://doi.org/10.1007/s10973-017-6149-z Panic S, Kiss E, Boskovic G (2015) Thermal decomposition kinetics of carbon nanotubes–the application of model-fitting methods. React Kinet Mech Catal 115(1):93–104. https://doi.org/10.1007/s11144-014-0828-6 Xu T, Huang X (2010) Pyrolysis properties and kinetic model of an asphalt binder containing a flame retardant. J Appl Polym Sci 119(5):2661–2665. https://doi.org/10.1002/app.32841 Krauklis AE, Dreyer I (2018) A simplistic preliminary assessment of Ginstling-Brounstein model for solid spherical particles in the context of a diffusion-controlled synthesis. Open Chemistry 16(1):64–72. 
https://doi.org/10.1515/chem-2018-0011 Pang Y, Li Q (2016) A review on kinetic models and corresponding analysis methods for hydrogen storage materials. Int J Hydrog Energy 41(40):18072–18087. https://doi.org/10.1016/j.ijhydene.2016.08.018 Provis JL (2016) On the use of the Jander equation in cement hydration modelling. RILEM Technical Letters 1:62–66. https://doi.org/10.21809/rilemtechlett.2016.13 Chou K-C, Hou X-M (2009) Kinetics of high-temperature oxidation of inorganic nonmetallic materials. J Am Ceram Soc 92(3):585–594. https://doi.org/10.1111/j.1551-2916.2008.02903.x Lv W, Lv X, Lv X, Xiang J, Bai C, Song B (2018) Non-isothermal kinetic studies on the carbothermic reduction of Panzhihua ilmenite concentrate. Miner Process Ext Metall:1–9 The authors would like to thank their respective institutes. Department of Physical Chemistry, Faculty of Chemistry, Ferdowsi University of Mashhad, Mashhad, Iran Hamid Zahedi Department of Physical Chemistry, Faculty of Chemistry, University of Isfahan, Isfahan, Iran Nahid Farzi Department of Pure Mathematics, Faculty of Mathematical Sciences, Tarbiat Modares University, Tehran, Iran Nasser Golestani The suggestion of subject and methods of this study, the mathematical programming needed for calculations in C and MATLAB languages as well as writing the manuscript were performed by HZ. The mathematical methods were analyzed rigorously and approved by NG. The manuscript was checked by NF comprehensively. All authors read and approved the final manuscript. Correspondence to Hamid Zahedi. Table A1 18 principle equations in the solid state chemical kinetics. Table C1 The values of b, c, t0.9 and statistical quantities of fitting [30] obtained from Eq. (3) (in paper) and Eq. (C1) and the breaking of t0.9 to common units of time for reaction of MgO with charcoal in molar ratio 1:1 and at temperature 1723.15 K. Table F1 The diversity of chemical reactions in the solid state supported by D4 kinetic model Zahedi, H., Farzi, N. & Golestani, N. Systematic kinetic study of magnesium production using magnesium oxide and carbonic materials at different temperatures. J. Eng. Appl. Sci. 68, 30 (2021). https://doi.org/10.1186/s44147-021-00043-7 Optimization Theory Correct Kinetic Equation
Applied Water Science June 2012 , Volume 2, Issue 2, pp 135–141 | Cite as Insight into biosorption equilibrium, kinetics and thermodynamics of crystal violet onto Ananas comosus (pineapple) leaf powder Sagnik Chakraborty Shamik Chowdhury Papita Das Saha Short Research Communication First Online: 21 February 2012 Biosorption performance of pineapple leaf powder (PLP) for removal of crystal violet (CV) from its aqueous solutions was investigated. To this end, the influence of operational parameters such as pH, biosorbent dose, initial dye concentration and temperature were studied employing a batch experimental setup. The biosorption process followed the Langmuir isotherm model with high correlation coefficients (R2 > 0.99) at different temperatures. The maximum monolayer biosorption capacity was found to be 78.22 mg g−1 at 293 K. The kinetic data conformed to the pseudo-second-order kinetic model. The activation energy of the system was calculated as 58.96 kJ mol−1, indicating chemisorption nature of the ongoing biosorption process. A thermodynamic study showed spontaneous and exothermic nature of the biosorption process. Owing to its low cost and high dye uptake capacity, PLP has potential for application as biosorbent for removal of CV from aqueous solutions. Biosorption Pineapple leaf powder Crystal violet Equilibrium Kinetics Thermodynamics The release of synthetic dye stuffs through the wastewater streams of industries such as textile, leather, rubber, paper, printing, paint, plastic, pigments, food and cosmetics is a serious global concern (Chowdhury et al. 2011a). This is mainly because of their negative ecotoxicological effects into the receiving water bodies and bioaccumulation in wildlife (Saha et al. 2010). Therefore, such dye effluent stream requires proper treatment prior to discharge. In recent years, biosorption has been strongly recommended by researchers worldwide as an efficient and economically sustainable technology for the removal of synthetic dyes from industrial effluents (Farooq et al. 2010; Rafatullah et al. 2010; Demribas 2009; Gupta and Suhas 2009). A number of non-conventional low-cost materials, particularly agricultural waste/by-products such as rice husk, rice bran, wheat bran, orange peel, banana pith, banana peel, plum kernels, apple pomace, wheat straw, sawdust, coir pith, sugarcane bagasse, tea leaves, bamboo dust etc. have been proposed by several workers as effective biosorbents for the removal of dyes from their aqueous solutions (Gupta and Suhas 2009; Chowdhury et al. 2010). Pineapple (Ananas comosus) is largely cultivated in tropical countries like India, China, Thailand, Indonesia and Taiwan and is consumed worldwide (Chowdhury et al. 2011b). Upon harvest, the leaves and stem cause potential disposal problems since they exist in enormous quantities and have no practical utility. Although direct open burning in fields is a common option for disposal, but this alternative causes serious air pollution. Thus the use of pineapple wastes as biosorbent is an attractive alternative from both economical and environmental point of view. Our previous study demonstrates that pineapple leaves in powdered form could be employed as an effective biosorbent for the removal of Basic Green 4 from aqueous solutions (Chowdhury et al. 2011b). Hence, a further attempt of the feasibility of applying pineapple leaf powder (PLP) for the removal of crystal violet (CV) dye from aqueous solution was explored in the present study. 
The study includes an evaluation of the effects of various operational parameters such as initial dye concentration, biosorbent dose, temperature and pH on the dye biosorption process employing a batch experimental setup. Biosorption isotherms and kinetics of the sorption process were studied. Also, thermodynamic and activation parameters were calculated in order to estimate the performance and predict the mechanism of the biosorption process.

Biosorbent

Mature pineapple leaves were collected from the local farmlands in Durgapur, West Bengal, India. The leaves were first thoroughly washed with tap water to remove dust, dirt and any unwanted particles. The leaves were then sun dried and subsequently oven dried at 363 ± 1 K for 24 h. The dried leaves were ground to a fine powder using a grinder, sieved to a constant size (100–125 μm) and used as biosorbent without any pretreatment for CV biosorption. The characterization of the biosorbent has been reported previously (Chowdhury et al. 2011b).

Crystal violet (CV) used in this study was of commercial quality (CI 42555, MF: C25H30N3Cl, MW: 408, λmax: 580 nm) and was used without further purification. Stock solution (1,000 mg L−1) was prepared by dissolving an accurately weighed quantity of the dye in double-distilled water. Experimental dye solutions of different concentrations were prepared by diluting the stock solution with a suitable volume of double-distilled water. The initial solution pH was adjusted using 0.1 M HCl and 0.1 M NaOH solutions.

Batch biosorption experiments

Batch biosorption experiments were carried out in 250 mL glass-stoppered Erlenmeyer flasks with a working volume of 100 mL of dye solution at a concentration of 50 mg L−1. A weighed amount (2 g) of biosorbent was added to the solution. The flasks were agitated at a constant speed of 150 rpm for 3 h in an incubator shaker (Innova 42, New Brunswick Scientific, Canada) at 303 ± 1 K. The influence of pH (3.0–10.0), initial dye concentration (20–100 mg L−1), biosorbent dose (0.5–5 g L−1) and temperature (293–313 K) was evaluated in the present study. Samples were collected from the flasks at predetermined time intervals for analysis of the residual dye concentration in the solution. The residual amount of dye in each flask was determined using a UV/VIS spectrophotometer (U-2800, Hitachi, Japan). The amount of dye adsorbed per unit of PLP (mg dye per g biosorbent) was calculated according to a mass balance on the dye concentration using Eq. (1):

$$ q_{\mathrm{e}} = \frac{(C_{\mathrm{i}} - C_{\mathrm{e}})\,V}{m} \qquad (1) $$

where Ci is the initial dye concentration (mg L−1), Ce is the equilibrium dye concentration in solution (mg L−1), V is the volume of the solution (L), and m is the mass of the biosorbent (g). The percent removal (%) of dye was calculated using the following equation:

$$ \mathrm{Removal}\ (\%) = \frac{C_{\mathrm{i}} - C_{\mathrm{e}}}{C_{\mathrm{i}}} \times 100 \qquad (2) $$

All the biosorption experiments were performed in triplicate. The results are the average of three independent measurements along with the standard deviation (±SD) at the 95% confidence level, with a precision in most cases of ±2%. The Microsoft Excel program was employed for data processing. Linear regression analyses were used to determine slopes and intercepts of the linear plots and for statistical analyses of the data.

Effect of pH

To investigate the effect of solution pH on the biosorption of CV by PLP, a series of batch biosorption experiments as described above were carried out over a pH range of 3–10.
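For each run, the quantities of Eqs. (1) and (2) follow directly from the measured equilibrium concentration, as in the minimal MATLAB sketch below; the numbers are placeholders, not measurements from this work.

```matlab
% Placeholder experimental conditions for one batch run.
Ci = 50;        % initial dye concentration, mg L^-1
Ce = 3.2;       % hypothetical equilibrium concentration, mg L^-1
V  = 0.1;       % solution volume, L
m  = 2;         % biosorbent mass, g

qe      = (Ci - Ce)*V/m;          % Eq. (1): dye uptake, mg g^-1
removal = (Ci - Ce)/Ci*100;       % Eq. (2): percent removal

fprintf('qe = %.2f mg/g, removal = %.1f %%\n', qe, removal);
```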
The results thus obtained are shown in Fig. 1. The biosorption capacity increased with increasing pH of the dye solution, appreciably up to pH 8.0. With further increase in pH, no significant change in dye binding capacity was observed. Thus, all further experiments were carried out at pH 8.0. A quite similar result was previously reported for biosorption of CV from aqueous solution by treated ginger waste (Kumar and Ahmad 2011).

Effect of pH on biosorption of CV by PLP (experimental conditions: initial dye concentration 50 mg L−1, biosorbent dosage 2 g/0.1 L, agitation speed 150 rpm, temperature 303 K, contact time 3 h)

The pH of the aqueous solution affects both the surface charge of the biosorbent material and the degree of ionization of the dye molecule (Saha et al. 2010). PLP mainly contains –OH, –CH, –NH2 and –C–O functional groups on its surface (Chowdhury et al. 2011b). Protonation of these functional groups at low pH values renders a net positive charge to the biosorbent surface, while deprotonation of the functional groups at high pH values renders it negatively charged. The pKa of CV is 0.8; it is completely ionized at pH greater than 0.8 and exists as a cationic species (Saha et al. 2012). At low pH values there is therefore a strong electrostatic repulsion between the positively charged dye ions and the positively charged biosorbent surface, resulting in low dye binding capacity. However, as the pH of the dye solution increases, a considerable increase in dye binding capacity is observed because of the strong electrostatic attraction between the negatively charged sites on the biosorbent and the dye cations.

Effect of biosorbent dose

The influence of biosorbent dose on the CV biosorption capacity of PLP was investigated in the range of 0.5–5 g. The results obtained are summarized in Fig. 2. The percentage dye removal increases with increase of biosorbent dose from 0.5 to 2 g, which can be explained by the increased biosorbent surface area and the availability of more binding sites (Saha et al. 2010). However, further increase in biosorbent dose did not significantly change the biosorption yield. This is due to the binding of almost all dye molecules to the biosorbent surface and the establishment of equilibrium between the dye molecules on the biosorbent and in the solution (Saha et al. 2010). These observations are in agreement with those reported previously by Saeed et al. (2010) for biosorption of CV by grapefruit peels.

Effect of biosorbent dose on biosorption of CV by PLP (experimental conditions: initial dye concentration 50 mg L−1, agitation speed 150 rpm, pH 8.0, temperature 303 K, contact time 3 h)

Effect of initial dye concentration

Figure 3 shows the biosorption performance of PLP at different initial concentrations of CV in the range of 20–100 mg L−1. The adsorption capacity increased from 62.36 to 80.15 mg g−1 with increase in initial dye concentration from 20 to 100 mg L−1. The increase in dye uptake capacity can be attributed to the fact that an increasing concentration gradient provides an increasing driving force to overcome the mass transfer resistances of the dye molecules between the aqueous and solid phases, leading to an increased equilibrium uptake capacity until sorbent saturation is achieved (Chowdhury and Das 2011). On the contrary, the biosorption efficiency decreased with increase in initial dye concentration.
This is mainly because all sorbents have a limited number of binding sites, which become saturated at a certain concentration (Chowdhury and Das 2011). Similar results have been reported for biosorption of CV by Sagaun sawdust (Khattri and Singh 2011). Effect of initial dye concentration on biosorption of CV by PLP (experimental conditions: biosorbent dosage 2 g/0.1 L, agitation speed 150 rpm, pH 8.0, temperature 303 K, contact time 3 h) Effect of temperature Figure 4 presents the biosorption profile of CV by PLP at different temperatures. The dye removal efficiency decreased with increase in temperature over the range of 293–303 K. Increase in temperature results in weakening of the bonds between the dye molecules and the binding sites of the biosorbent leading to low dye removal efficiency (Chakraborty et al. 2011). Such a trend is indicative of the fact that biosorption of CV by PLP is kinetically controlled by an exothermic process. Similar phenomenon was also observed for biosorption of CV by NaOH-modified rice husk (Chakraborty et al. 2011). Effect of temperature on biosorption of CV by PLP (experimental conditions: initial dye concentration 50 mg L−1, biosorbent dosage 2 g/0.1 L, agitation speed 150 rpm, pH 8.0, contact time 3 h) Biosorption isotherms The Langmuir and Freundlich isotherm models were used to describe the equilibrium biosorption data of CV onto PLP (Chowdhury and Saha 2010). $$ {\text{Langmuir: }}\frac{{C_{\rm {e}} }}{{q_{\rm {e}} }}\, = \,\frac{{C_{\rm {e}} }}{{q_{\rm {m}} }}\, + \,\frac{1}{{K_{\rm {L}} \,q_{\rm {m}} }} $$ $$ {\text{Freundlich: }}\log \,q_{\rm {e}} \, = \,\log \,K_{\rm {F}} \, + \,\left(\frac{1}{n}\right)\,\log \,C_{\rm {e}} $$ where qe (mg g−1) and Ce (mg L−1) are the solid phase concentration and the liquid phase concentration of adsorbate at equilibrium, respectively, qm (mg g−1) is the maximum adsorption capacity, KL (L mg−1) is the Langmuir adsorption equilibrium constant, KF (mg g−1) (L g−1)1/n is the Freundlich constant related to sorption capacity and n is the heterogeneity factor. The parameters obtained from the Langmuir (Ce/qe vs. Ce) and Freundlich (log q e vs. log Ce) isotherm plots are listed in Table 1. To quantitatively compare the accuracy of the models, the correlation coefficients (R 2 ) were also calculated and are also listed in Table 1. Analysis of the R 2 values suggests that the Langmuir isotherm model provides best fit to the equilibrium biosorption data at all studied temperatures implying monolayer coverage of CV molecules onto the biosorbent surface. The maximum monolayer biosorption capacity (qm) is 78.22 mg g−1 at 293 K. Table 2 summarizes the comparison of the maximum CV biosorption capacity of various sorbent materials including PLP. The comparison shows that PLP has higher biosorption capacity of CV than many of the other reported sorbent materials. Differences in dye uptake capacity are due to the differences in properties of each sorbent material such as structure, functional groups and surface area. The easy availability and cost effectiveness of PLP are some additional advantages, implying that PLP can be a better biosorbent for removal of CV from aqueous solutions. 
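A minimal MATLAB sketch of the two linearized isotherm fits is given below; the equilibrium data are placeholders, not the values behind Table 1.

```matlab
% Placeholder equilibrium data (mg L^-1 and mg g^-1).
Ce = [1.5 3.2 6.0 10.5 17.8];
qe = [30.1 45.6 58.3 66.9 72.4];

% Langmuir linear form: Ce/qe = Ce/qm + 1/(KL*qm)
pL = polyfit(Ce, Ce./qe, 1);
qm = 1/pL(1);                     % maximum monolayer capacity, mg g^-1
KL = pL(1)/pL(2);                 % L mg^-1, since the intercept equals 1/(KL*qm)

% Freundlich linear form: log(qe) = log(KF) + (1/n)*log(Ce)
pF = polyfit(log10(Ce), log10(qe), 1);
n  = 1/pF(1);
KF = 10^pF(2);

fprintf('Langmuir:   qm = %.1f mg/g, KL = %.3f L/mg\n', qm, KL);
fprintf('Freundlich: KF = %.2f, n = %.2f\n', KF, n);
```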
Table 1 Isotherm constants (Langmuir: qm/mg g−1, KL/L mg−1; Freundlich: KF/(mg g−1)(L mg−1)1/n, n) and kinetic parameters (pseudo-first-order: qe,cal/mg g−1, k1/min−1; pseudo-second-order: qe,cal/mg g−1, k2/g mg−1 min−1) for biosorption of CV by PLP at different temperatures

Table 2 Comparison of the CV biosorption capacity (qmax/mg g−1) of PLP with other reported low-cost adsorbents: coir pith (Namasivayam et al. 2001), sugarcane dust (Khattri and Singh 1999), neem sawdust, Calotropis procera leaf (Ali and Muhammad 2008), Sagaun sawdust, Jalshakti® polymer (Dhodapkar et al. 2007; Annadurai et al. 2004), jute fiber carbon (Porkodi and Kumar 2007), coniferous pinus bark powder (Ahmad 2009; Parab et al. 2009; Wang et al. 2008), jackfruit leaf powder (Saha et al. 2012), NaOH-modified rice husk (Chakraborty et al. 2011) and pineapple leaf powder (this study)

The magnitude of the Freundlich constant n gives a measure of the favorability of biosorption. Values of n between 1 and 10 represent a favorable biosorption process (Chakraborty et al. 2011). In the present study, the value of n shows the same trend at all temperatures, indicating the favorable nature of the biosorption of CV by PLP.

Biosorption kinetics

The pseudo-first-order and pseudo-second-order kinetic models were used to study the biosorption kinetics of CV onto PLP (Chowdhury and Saha 2010):

$$ \text{Pseudo-first-order:}\ \log (q_e - q_t) = \log q_e - \frac{k_1}{2.303}\,t $$

$$ \text{Pseudo-second-order:}\ \frac{t}{q_t} = \frac{1}{k_2 q_e^2} + \frac{1}{q_e}\,t $$

where qt and qe are the amounts of dye adsorbed at time t and at equilibrium (mg g−1), k1 (min−1) is the pseudo-first-order rate constant and k2 (g mg−1 min−1) is the pseudo-second-order rate constant. The values of the pseudo-first-order constants k1 and qe were calculated from the slope and intercept of the plots of log (qe − qt) versus t, while the pseudo-second-order constants k2 and qe were calculated from the slope and intercept of the plots of t/qt versus t. The calculated model parameters along with the correlation coefficients (R2) are listed in Table 1. As can be seen from Table 1, the low R2 (<0.90) values for the pseudo-first-order model indicate that this model is not suitable for describing the biosorption kinetics of CV onto PLP. However, the relatively high R2 (>0.99) values for the pseudo-second-order model suggest that the biosorption process obeys pseudo-second-order kinetics at all studied temperatures. The applicability of the pseudo-second-order kinetic model indicates that the biosorption of CV onto PLP is chemisorption and that the rate-determining step is probably surface biosorption. The pseudo-second-order rate constant k2 decreases with increase in temperature, suggesting the exothermic nature of the biosorption process.

Activation parameters

From the pseudo-second-order rate constant k2 (Table 1), the activation energy Ea for biosorption of CV by PLP was determined using the Arrhenius equation (Chowdhury and Saha 2010):

$$ \ln k = \ln A - \frac{E_a}{RT} $$

where k is the rate constant, A is the Arrhenius constant, Ea is the activation energy (kJ mol−1), R is the gas constant (8.314 J mol−1 K−1) and T is the temperature (K). By plotting ln k2 versus 1/T, Ea was obtained from the slope of the linear plot and was estimated to be 58.96 kJ mol−1. According to the literature, this indicates that biosorption of CV by PLP follows chemisorption (Chowdhury et al. 2011b).
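The pseudo-second-order parameters and the Arrhenius activation energy can be extracted with a few lines of MATLAB, sketched below with placeholder data rather than the measurements of Table 1.

```matlab
% Placeholder kinetic run: time (min) and uptake qt (mg g^-1).
t  = [5 10 20 40 60 90 120 180];
qt = [12.4 21.0 33.5 47.8 55.1 61.0 64.2 67.3];

% Pseudo-second-order linear form: t/qt = 1/(k2*qe^2) + t/qe
p  = polyfit(t, t./qt, 1);
qe = 1/p(1);                         % equilibrium uptake, mg g^-1
k2 = p(1)^2/p(2);                    % g mg^-1 min^-1, since the intercept equals 1/(k2*qe^2)

% Arrhenius: ln k2 = ln A - Ea/(R*T), using k2 values obtained at several temperatures.
T   = [293 303 313];                 % K
k2T = [0.021 0.012 0.007];           % hypothetical k2 values; sign and size of Ea depend on them
pa  = polyfit(1./T, log(k2T), 1);
Ea  = -pa(1)*8.314;                  % J mol^-1

fprintf('qe = %.1f mg/g, k2 = %.4f g/(mg min), Ea = %.1f kJ/mol\n', qe, k2, Ea/1e3);
```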
The Eyring equation was used to calculate the standard enthalpy (ΔH#) and entropy (ΔS#) of activation (Chowdhury et al. 2011a):

$$ \ln \frac{k}{T} = \ln \frac{k_B}{h} + \frac{\Delta S^{\#}}{R} - \frac{\Delta H^{\#}}{RT} $$

where k is the rate constant, kB is the Boltzmann constant (1.3807 × 10−23 J K−1), h is the Planck constant (6.6261 × 10−34 J s), R is the gas constant (8.314 J mol−1 K−1) and T is the temperature (K). The values of ΔH# and ΔS# were calculated from the slope and intercept of the plot of ln(k2/T) versus 1/T and were found to be −0.146 kJ mol−1 for ΔH# and −198.18 J mol−1 K−1 for ΔS#. The negative value of ΔH# (−0.146 kJ mol−1) indicates the exothermic nature of the biosorption process. The negative value of ΔS# suggests that biosorption of CV onto PLP proceeds through an associative mechanism (Chowdhury et al. 2011a). The values of ΔH# and ΔS# were used to compute the free energy of activation (ΔG#) from the relation

$$ \Delta G^{\#} = \Delta H^{\#} - T\Delta S^{\#} $$

The values of ΔG# were found to be 58.21, 60.19 and 62.17 kJ mol−1 at T = 293, 303 and 313 K, respectively. The large positive values of ΔG# suggest that energy input was required in the biosorption reaction to convert reactants into products.

Thermodynamic parameters

The thermodynamic parameters, namely the Gibbs free energy change (ΔG0), enthalpy (ΔH0) and entropy (ΔS0) of the biosorption process, were calculated using the following equations for the temperature range 293–313 K (Chowdhury and Saha 2010):

$$ \Delta G^{0} = -RT\ln K_C $$

$$ K_C = \frac{C_a}{C_e} $$

$$ \Delta G^{0} = \Delta H^{0} - T\Delta S^{0} $$

where KC is the distribution coefficient for adsorption, Ca is the equilibrium dye concentration on the adsorbent (mg L−1) and Ce is the equilibrium dye concentration in solution (mg L−1). The calculated Gibbs free energy (ΔG0) values at different temperatures for biosorption of CV onto PLP are listed in Table 3. The values of ΔH0 and ΔS0 were determined from the slope and intercept of the plot of ΔG0 versus T and are also listed in Table 3. The negative values of ΔG0 at the different temperatures indicate the spontaneous nature of the biosorption process. The negative value of ΔH0 indicates that the biosorption reaction is exothermic, while the negative value of ΔS0 suggests that the process is enthalpy driven.

Table 3 Thermodynamic parameters (ΔG0/kJ mol−1 at 293–313 K, ΔH0/kJ mol−1, ΔS0/J mol−1 K−1) for biosorption of CV onto PLP
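The distribution-coefficient route to the thermodynamic parameters can likewise be scripted; the sketch below uses hypothetical Ca and Ce values, not the data behind Table 3.

```matlab
% Placeholder equilibrium data at three temperatures.
T  = [293 303 313];                  % K
Ca = [46.8 45.1 43.0];               % dye on the sorbent at equilibrium, mg L^-1
Ce = [3.2 4.9 7.0];                  % dye left in solution, mg L^-1
R  = 8.314;                          % J mol^-1 K^-1

Kc = Ca./Ce;                         % distribution coefficient
dG = -R.*T.*log(Kc);                 % Gibbs free energy change, J mol^-1

% dG = dH - T*dS, i.e. linear in T with slope -dS and intercept dH.
p  = polyfit(T, dG, 1);
dS = -p(1);                          % J mol^-1 K^-1
dH = p(2);                           % J mol^-1

fprintf('dG(T) = %s kJ/mol\n', mat2str(dG/1e3, 3));
fprintf('dH = %.2f kJ/mol, dS = %.1f J/(mol K)\n', dH/1e3, dS);
```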
Conclusions

Biosorption potential of PLP to remove CV from aqueous solutions was investigated. Batch experiments were carried out as a function of solution pH, initial dye concentration, biosorbent dose and temperature. Both temperature and pH were found to have a strong influence on the biosorption process. The biosorption efficiency decreased with increase in initial dye concentration, while it increased with increase in biosorbent dose up to a certain level. The Langmuir isotherm showed the best fit to the equilibrium biosorption data, with a maximum monolayer biosorption capacity of 78.22 mg g−1 at 293 K. Kinetic studies showed that the biosorption process followed pseudo-second-order kinetics. The activation energy (Ea) determined using the Arrhenius equation confirmed that biosorption of CV by PLP involved chemical ion-exchange. Thermodynamic studies showed that the biosorption process was spontaneous and exothermic. Compared to various other sorbents reported in the literature, PLP appears to be a promising biosorbent for practical applicability due to its easy availability and high dye binding capacity. However, to apply this environmentally friendly and efficient biosorbent for the removal of contaminants from real industrial effluents, continuous column studies need to be performed.

References

Ahmad R (2009) Studies on adsorption of crystal violet dye from aqueous solution onto coniferous pinus bark powder (CPBP). J Hazard Mater 171:767–773
Ali H, Muhammad SK (2008) Biosorption of crystal violet from water on leaf biomass of Calotropis procera. J Environ Sci Technol 1:143–150
Annadurai G, Juang R-S, Lee D-J (2004) Use of cellulose-based wastes for adsorption of dyes from aqueous solutions. J Hazard Mater 92:263–274
Chakraborty S, Chowdhury S, Saha PD (2011) Adsorption of crystal violet from aqueous solution onto NaOH-modified rice husk. Carbohydr Polym 86:1533–1541
Chowdhury S, Das P (2011) Mechanistic kinetic and thermodynamic evaluation of adsorption of hazardous malachite green onto conch shell powder. Sep Sci Technol 46:1966–1976
Chowdhury S, Saha P (2010) Sea shell powder as a new adsorbent to remove Basic Green 4 (Malachite Green) from aqueous solutions: equilibrium kinetic and thermodynamic studies. Chem Eng J 164:168–177
Chowdhury S, Mishra R, Kushwaha P, Saha P (2010) Removal of safranin from aqueous solutions by NaOH-treated rice husk: thermodynamics kinetics and isosteric heat of adsorption. Asia-Pac J Chem Eng. doi:10.1002/apj.525
Chowdhury S, Mishra R, Saha P, Kushwaha P (2011a) Adsorption thermodynamics kinetics and isosteric heat of adsorption of malachite green onto chemically modified rice husk. Desalination 265:159–168
Chowdhury S, Chakraborty S, Saha P (2011b) Biosorption of Basic Green 4 from aqueous solution by Ananas comosus (pineapple) leaf powder. Coll Surf B 84:520–527
Demribas A (2009) Agricultural based activated carbons for the removal of dyes from aqueous solutions: a review. J Hazard Mater 167:1–9
Dhodapkar R, Rao NN, Pande SP, Nandy T, Devotta S (2007) Adsorption of cationic dyes on Jalshakti super adsorbent polymer and photocatalytic regeneration of the adsorbent. React Funct Polym 67:540–548
Farooq U, Kozinski JA, Khan MA, Athar M (2010) Biosorption of heavy metal ions using wheat based biosorbents—a review of the recent literature. Bioresour Technol 101:5043–5053
Gupta VK, Suhas (2009) Application of low-cost adsorbents for dye removal—a review. J Environ Manage 90:2313–2342
Khattri SD, Singh MK (1999) Colour removal from dye wastewater using sugarcane dust as an adsorbent. Adsorpt Sci Technol 17:269–282
Khattri SD, Singh MK (2009) Colour removal from synthetic dye wastewater using a biosolid sorbent. Water Air Soil Pollut 120:283–294
Khattri SD, Singh MK (2011) Use of Sagaun sawdust as an adsorbent for the removal of crystal violet dye from simulated wastewater. Environ Prog Sustain Energy. doi:10.1002/ep.10567
Kumar R, Ahmad R (2011) Biosorption of hazardous crystal violet dye from aqueous solution onto treated ginger waste (TGW). Desalination 265:112–118
Namasivayam C, Kumar MD, Selvi K, Begum RA, Vanathi T, Yamuna RT (2001) 'Waste' coir pith—a potential biomass for the treatment of dyeing wastewaters. Biomass Bioenergy 6:477–483
Parab H, Sudersanan M, Shenoy N, Pathare T, Vaze B (2009) Use of agro-industrial wastes for removal of basic dyes from aqueous solutions. Clean Soil Air Water 37:963–969
Porkodi K, Kumar KV (2007) Equilibrium kinetics and mechanism modeling and simulation of basic and acid dyes sorption onto jute fiber carbon: Eosin yellow malachite green and crystal violet single component systems. J Hazard Mater 143:311–327
Rafatullah M, Sulaiman O, Hashim R, Ahmad A (2010) Adsorption of methylene blue on low-cost adsorbents: a review. J Hazard Mater 177:70–80
Saeed A, Sharif M, Iqbal M (2010) Application potential of grapefruit peel as dye sorbent: kinetics equilibrium and mechanism of crystal violet adsorption. J Hazard Mater 179:564–572
Saha P, Chowdhury S, Gupta S, Kumar I (2010) Insight into adsorption equilibrium kinetics and thermodynamics of Malachite Green onto clayey soil of Indian origin. Chem Eng J 165:874–882
Saha PD, Chakraborty S, Chowdhury S (2012) Batch and continuous (fixed-bed column) biosorption of crystal violet by Artocarpus heterophyllus (jackfruit) leaf powder. Coll Surf B 92:262–270
Wang XS, Liu X, Wen L, Zhou Y, Li Z (2008) Comparison of basic dye crystal violet removal from aqueous solution by low-cost biosorbents. Sep Sci Technol 43:3712–3731

This article is published under license to BioMed Central Ltd. Open Access: this article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.

Author affiliation: 1. Department of Biotechnology, National Institute of Technology Durgapur, Durgapur, India

Chakraborty, S., Chowdhury, S. & Saha, P.D. Appl Water Sci (2012) 2: 135. https://doi.org/10.1007/s13201-012-0030-9
Received 17 October 2011; Accepted 02 February 2012; First Online 21 February 2012
2008 APS March Meeting
Monday–Friday, March 10–14, 2008; New Orleans, Louisiana

Session X40: Self Assembled Protein Cages
Sponsoring Units: DBP
Chair: William Klug, University of California, Los Angeles
Room: Morial Convention Center 232

X40.00001: A minimal model for protein coat dynamics in intracellular vesicular transport
Ranjan Mukhopadhyay, Hui Wang, Greg Huber
Within eukaryotic cells, proteins are transported by vesicles formed from coated regions of membranes. The assembly of coat proteins deforms the membrane patch and drives vesicle formation. Once the vesicle has pinched off, the protein coat rapidly disassembles. Motivated by recent experimental results, we propose a minimal model for the dynamics of coat assembly and disassembly and study the spatio-temporal behavior of the system. We will show that for a range of parameters, our model can robustly generate a steady state distribution of protein clusters with characteristic sizes and will obtain the scaling behavior of average cluster size with the parameters of the model. We will also discuss the coupling of coat dynamics to sorting of cargo proteins. [Preview Abstract]

X40.00002: The study of viral assembly with fluorescence fluctuation spectroscopy
Joachim Mueller, Bin Wu, Yan Chen
Enveloped viruses contain an encapsulating membrane that the virus acquires from the host cell during the budding process. The presence of the enveloping lipid membrane complicates the physical characterization of the proteins assembled within the virus considerably. Here we present a method based on fluorescence fluctuations that quantifies the copy number of proteins within an enveloped viral particle. We choose the viral protein Gag of the human immunodeficiency virus (HIV) type 1 as a model system, because Gag expressed in cells is sufficient to produce viral-like particles (VLPs) of the same size as authentic virions. VLPs harvested from cells that express fluorescently labeled Gag were investigated by two-photon fluorescence fluctuation spectroscopy. The autocorrelation functions of the fluctuations revealed a hydrodynamic size of the fluorescent VLPs consistent with previous results based on electron microscopy. Further analysis of the fluctuations revealed a copy number of Gag per virion that is inconsistent with the prevailing model of HIV assembly. We will discuss the implications of our experimental results for the assembly process of VLPs. [Preview Abstract]

X40.00003: Spherical Proteins and Viral Capsids Studied by Theory of Elasticity
Zheng Yang, Ivet Bahar, Michael Widom
Coarse-grained elastic network models have been successful in elucidating the fluctuation dynamics of proteins around their native conformations. It is well established that the low-frequency collective motions derived by simplified normal mode analysis depend on the overall 3-dimensional shape of the biomolecule. Given that the large scale collective motions are usually involved in biological function, our objective in this work is to gain more insights into large scale collective motions of spherical proteins and virus capsids by considering a continuous model with perfect spherical symmetry. To this end, we compare the global dynamics of proteins and the analytical solutions from an elastic wave equation with spherical boundary conditions. In addition, an icosahedral discrete model is generated and analyzed for validating our continuous model.
Applications to lumazine synthase, satellite tobacco mosaic virus and other viruses show that the spherical elastic model can efficiently provide insights on collective motions that are otherwise obtained by detailed elastic network models.

X40.00004: Low frequency mechanical modes of viruses with atomic detail
Eric Dykeman, Otto Sankey
The low frequency mechanical modes of viruses can provide important insights into the large global motions that a virus may exhibit. Recently it has been proposed that these large global motions may be excited using impulsive stimulated Raman scattering, producing permanent damage to the virus. In order to understand the coupling of external probes to the capsid, vibrational modes with atomic detail are essential. The standard approach to find the atomic modes of a molecule with $N$ atoms requires the formation and diagonalization of a $3N\times 3N$ matrix. As viruses have $10^5$ or more atoms, the standard approach is difficult. Using ideas from electronic structure theory, we have developed a method to construct the mechanical modes of large molecules such as viruses with atomic detail. Application to viruses such as the cowpea chlorotic mottle virus, satellite tobacco necrosis virus, and M13 bacteriophage shows a fairly complicated picture of the mechanical modes.

X40.00005: Diversity of in-vivo assembled HIV-1 capsids
Se Il Lee, Toan Nguyen
Understanding the capsid assembly process of Human Immunodeficiency Virus (HIV), the causative agent of Acquired Immunodeficiency Syndrome (AIDS), is very important because of recent intense interest in capsid-oriented viral therapy. The unique conical shape of the mature HIV-1 capsid has drawn significant interest in the biological community and has started to attract attention from the physics community. Previous studies showed that in a free assembly process, the HIV-1 conical shape is not thermodynamically stable. However, if the volume of the capsid is constrained during assembly and the capsid protein shell has high spontaneous curvature, the conical shape is stable. In this work, we focus on in-vivo HIV-1 capsid assembly. For this case, the viral envelope membrane present during assembly imposes a constraint on the length of the capsid. We use an elastic continuum shell theory to approximate the energies of various HIV-1 capsid shapes (spherical, cylindrical and conical). We show that for a certain range of viral membrane diameters, the conical and cylindrical shapes are both thermodynamically stable. This result is supported by the experimental observation that in-vivo assembled HIV-1 capsids are very heterogeneous in shapes and sizes. Numerical calculation is also performed to improve the theoretical approximation.

X40.00006: An elastic model of partial budding of retroviruses
Rui Zhang, Toan Nguyen
Retroviruses are characterized by their unique infection strategy of reverse transcription, in which the genetic information flows from RNA back to DNA. The most well known representative is the human immunodeficiency virus (HIV). Unlike budding of traditional enveloped viruses, retrovirus budding happens together with the formation of spherical virus capsids at the cell membrane. Led by this unique budding mechanism, we proposed an elastic model of retrovirus budding in this work. We found that if the lipid molecules of the membrane are supplied fast enough from the cell interior, the budding always proceeds to completion.
In the opposite limit, there is an optimal size of partially budded virions. The zenith angle of these partially spherical capsids, $\alpha$, is given by $\alpha\simeq(\tau^2/\kappa\sigma)^{1/4}$, where $\kappa$ is the bending modulus of the membrane, $\sigma$ is the surface tension of the membrane, and $\tau$ characterizes the strength of the capsid protein interaction. If $\tau$ is large enough such that $\alpha\sim\pi$, the budding is complete. Our model explains many features of retrovirus partial budding observed in experiments.

X40.00007: Calibrating elastic parameters from molecular dynamics simulations of capsid proteins
Stephen Hicks, Christopher Henley
Virus capsids are modeled with elastic network models in which a handful of parameters determine transitions in assembly [1] and morphology [2]. We introduce an approach to compute these parameters from the microscopic structure of the proteins involved. We consider each protein as one or a few rigid bodies with very general interactions, which we parameterize by fitting the simulated equilibrium fluctuations (relative translations and rotations) of a pair of proteins (or fragments) to a 6-dimensional Gaussian. We can then compose these generalized springs into the global capsid structure to determine the continuum elastic parameters. We demonstrate our approach on the HIV capsid protein and compare our results with the observed lattice structure (from cryo-EM [3] and AFM indentation studies).
[1] R. Zandi et al., PNAS 101 (2004) 15556.
[2] J. Lidmar, L. Mirny, and D. R. Nelson, PRE 68 (2003) 051910.
[3] B. K. Ganser-Pornillos et al., Cell 131 (2007) 70.

X40.00008: Coarse-grained mechanics of viral shells
William S. Klug, Melissa M. Gibbons
We present an approach for creating three-dimensional finite element models of viral capsids from atomic-level structural data (X-ray or cryo-EM). The models capture heterogeneous geometric features and are used in conjunction with three-dimensional nonlinear continuum elasticity to simulate nanoindentation experiments as performed using atomic force microscopy. The method is extremely flexible, able to capture varying levels of detail in the three-dimensional structure. Nanoindentation simulations are presented for several viruses: Hepatitis B, CCMV, HK97, and $\phi$29. In addition to purely continuum elastic models, a multiscale technique is developed that combines finite-element kinematics with MD energetics such that large-scale deformations are facilitated by a reduction in degrees of freedom. Simulations of these capsid deformation experiments provide a testing ground for the techniques, as well as insight into the strength-determining mechanisms of capsid deformation. These methods can be extended as a framework for modeling other proteins and macromolecular structures in cell biology.

X40.00009: ABSTRACT HAS BEEN MOVED TO SESSION C1

X40.00010: Biochemistry in the Nanopores
Samir M. Iqbal, Bala Murali Venkatesan, Demir Akin, Rashid Bashir
Solid-state technology is fast advancing novel nano-structures for biomolecular detection. Solid-state nanopores have emerged as a potential replacement for Sanger's method of DNA sequencing. While the passage of the DNA molecule through the nanopore has been reported extensively, little has been done to identify the individual base pairs or sequences within the molecule.
Learning from the mechanics of ion channels on the cell surface, we functionalized the solid-state nanopores to recognize and selectively regulate the flow of molecules through the pore. The probe DNA was immobilized by chemical adsorption, and target DNA was passed under electrophoretic bias. Single-base-mismatch selectivity was achieved by using a hairpin loop in the probe. We could thus distinguish between perfectly complementary and mismatched target molecules. We will expand on the theoretical framework that governs the interactions of the probe and target molecules, as observed from the pulse behavior.

X40.00011: Poisson pulsed control of particle escape
Marie McCrary, Lora Billings, Ira Schwartz, Mark Dykman
We consider the problem of escape in a double well potential. With a weak background Gaussian noise, the escape rate is well known and follows an exponential scaling with the noise intensity $D$. Here, we consider adding a small Poisson noise to the Gaussian noise. We compute the change in escape time as we add Poisson distributed pulses of a given duration and amplitude. The escape rate acquires an extra factor which is determined by the characteristic functional of the Poisson noise calculated for a function, which is determined by the system dynamics and is inversely proportional to $D$. As a result, for small $D$ even weak Poisson pulses can lead to a significant change of the escape rate. The Poisson noise induced factor depends sensitively on the interrelation between the noise correlation time and the relaxation time of the system. We compare analytical results with extensive numerical simulations. The numerical computation of escape rates for multiple interacting particles in a well will also be shown.

X40.00012: Quorum sensing and biofilm formation investigated using laser-trapped bacterial arrays
Vernita Gordon, John Butler, Ivan Smalyukh, Matthew Parsek, Gerard Wong
Studies of individual, free-swimming (planktonic) bacteria have yielded much information about their genetic and phenotypic characteristics and about "quorum sensing," the autoinducing process by which bacteria detect high concentrations of other bacteria. However, in most environments the majority of bacteria are not in the planktonic form but are rather in biofilms, which are highly structured, dynamic communities of multiple bacteria that adhere to a surface and to each other using an extracellular polysaccharide matrix. Bacteria in biofilms are phenotypically very different from their genetically identical planktonic counterparts. Among other characteristics, they are much more antibiotic-resistant and virulent. Such biofilms form persistent infections on medical implants and in the lungs of cystic fibrosis patients, where Pseudomonas aeruginosa biofilms are the leading cause of lung damage and, ultimately, death. To understand the importance of different extracellular materials, motility mechanisms, and quorum sensing for biofilm formation and stability, we use single-gene knockout mutants and an infrared laser trap to create a bacterial aggregate that serves as a model biofilm and allows us to measure the importance of these factors as a function of trapping time, surface, and nutritional environment.

X40.00013: Self-Polarization of Cells in Elastic Gels
Assaf Zemel, Samuel Safran
The shape of a cell as well as the rigidity and geometry of its surroundings play an important role in vital cellular processes.
The contractile activity of cells provides a generic means by which cells may sense and respond to mechanical features. The matrix stresses, which depend on the elasticity and geometry of cells, feed back on the cells and influence their activity. This suggests a mechanical mechanism by which cells control their shape and forces. We present a quantitative, mechanical model that predicts that cells in an elastic medium can self-polarize to form well ordered stress fibers. We focus both on single cells in a gel and on an ensemble of cells that is confined to some region within the gel. While the magnitude of the cellular forces is found to increase monotonically with the matrix rigidity, the anisotropy of the forces, and thus the ability of the cells to polarize, is predicted to depend non-monotonically on the medium's rigidity. We discuss these results in relation to experimental findings and to the observation of an optimal medium elasticity for cell function and differentiation.

X40.00014: Active suspensions in shear flow
A. Ahmadi, M.C. Marchetti, T.B. Liverpool
We report on the structure and rheology of an active suspension of cytoskeletal filaments and motor proteins in shear flow. Hydrodynamic equations for an active suspension were derived earlier by us [arXiv:q-bio.CB/0703029v1] by coarse-graining the Smoluchowski equation for a model of filaments and motors. The model incorporates the coupling of orientational order to flow and accounts for the exchange of momentum between filaments and solvent. In the present study we investigate the role of active crosslinkers on the formation and stability of ordered states (polar and nematic) under external shear flow. We also study the effect of motor activity on the rheological behavior of the ordered states away from boundaries. This work may also be relevant for the understanding of the flow-driven reorientation of endothelial cells under the shear stress imposed by blood flow.

X40.00015: Selective advantage for sexual replication with random haploid fusion
Emmanuel Tannenbaum
This talk develops a simplified set of models describing asexual and sexual replication in unicellular diploid organisms. The models assume organisms whose genomes consist of two chromosomes, where each chromosome is assumed to be functional if and only if it is equal to some master sequence. The fitness of an organism is determined by the number of functional chromosomes in its genome. For a population replicating asexually, a cell replicates both of its chromosomes, and then divides and splits its genetic material evenly between the two cells. For a population replicating sexually, a given cell first divides into two haploids, which enter a haploid pool. Within the haploid pool, haploids fuse into diploids, which then divide via the normal mitotic process. When the cost for sex is small, as measured by the ratio of the characteristic haploid fusion time to the characteristic growth time, we find that sexual replication with random haploid fusion leads to a greater mean fitness for the population than a purely asexual strategy. The results of this talk are consistent with previous studies suggesting that sex is favored at intermediate mutation rates, for slowly replicating organisms, and at high population densities.
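Several of the abstracts in this session (X40.00003, X40.00004, X40.00007) rely on coarse-grained elastic network normal-mode analysis, in which a $3N\times 3N$ Hessian built from pairwise springs is diagonalized to obtain the low-frequency collective motions. As a rough illustration of that core computation only, the Python sketch below builds an anisotropic-network-model Hessian for a toy shell of beads and extracts its softest modes. The bead coordinates, cutoff radius, spring constant, and function names are illustrative assumptions and are not taken from any of the studies above.

import numpy as np

def anm_hessian(coords, cutoff=15.0, gamma=1.0):
    # Anisotropic-network-model Hessian: beads closer than `cutoff` are joined
    # by springs of stiffness `gamma` (both values are illustrative).
    n = len(coords)
    hess = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = float(d @ d)
            if r2 > cutoff ** 2:
                continue
            block = -gamma * np.outer(d, d) / r2   # 3x3 off-diagonal super-element
            hess[3*i:3*i+3, 3*j:3*j+3] = block
            hess[3*j:3*j+3, 3*i:3*i+3] = block
            hess[3*i:3*i+3, 3*i:3*i+3] -= block
            hess[3*j:3*j+3, 3*j:3*j+3] -= block
    return hess

# Toy "capsid": 200 beads scattered on a spherical shell of radius 50 (arbitrary
# units), standing in for the C-alpha coordinates of a real structure.
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
pts = 50.0 * pts / np.linalg.norm(pts, axis=1, keepdims=True)

evals = np.linalg.eigvalsh(anm_hessian(pts))     # full diagonalization of the 3N x 3N matrix
soft = evals[evals > 1e-8][:10]                  # discard near-zero rigid-body modes
print("lowest nonzero ANM eigenvalues:", np.round(soft, 4))

For a real capsid one would use coordinates from a deposited structure and, for $10^5$ or more atoms, sparse or further coarse-grained methods rather than full diagonalization, which is exactly the bottleneck the X40.00004 abstract addresses.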
Observation of parity-time symmetry breaking transitions in a dissipative Floquet system of ultracold atoms Jiaming Li1, Andrew K. Harter ORCID: orcid.org/0000-0002-9787-98022, Ji Liu2, Leonardo de Melo1,2, Yogesh N. Joglekar ORCID: orcid.org/0000-0002-3222-14362 & Le Luo ORCID: orcid.org/0000-0002-8375-63261,2 Nature Communications volume 10, Article number: 855 (2019) Cite this article Ultracold gases Open physical systems with balanced loss and gain, described by non-Hermitian parity-time \(\left( {{\cal P}{\cal T}} \right)\) reflection symmetric Hamiltonians, exhibit a transition which could engender modes that exponentially decay or grow with time, and thus spontaneously breaks the \({\cal P}{\cal T}\)-symmetry. Such \({\cal P}{\cal T}\)-symmetry-breaking transitions have attracted many interests because of their extraordinary behaviors and functionalities absent in closed systems. Here we report on the observation of \({\cal P}{\cal T}\)-symmetry-breaking transitions by engineering time-periodic dissipation and coupling, which are realized through state-dependent atom loss in an optical dipole trap of ultracold 6Li atoms. Comparing with a single transition appearing for static dissipation, the time-periodic counterpart undergoes \({\cal P}{\cal T}\)-symmetry breaking and restoring transitions at vanishingly small dissipation strength in both single and multiphoton transition domains, revealing rich phase structures associated to a Floquet open system. The results enable ultracold atoms to be a versatile tool for studying \({\cal P}{\cal T}\)-symmetric quantum systems. A non-Hermitian parity-time reflection symmetric (\({\cal P}{\cal T}\)-symmetric) Hamiltonian, that is invariant under combined parity \(\left( {\cal P} \right)\) and time-reversal \(\left( {\cal T} \right)\) operations, has been considered as a natural extension of the conventional Hermitian quantum theory to describe an open quantum system with balanced loss and gain1,2,3. \({\cal P}{\cal T}\)-symmetric Hamiltonians exhibit many interesting behaviors4,5,6,7,8,9, in which a key property is \({\cal P}{\cal T}\)-symmetry-breaking transitions that occur at an exceptional point10,11 – a point in the parameter space where two resonant modes of the Hamiltonian become degenerate. A number of seminal studies1,12 have shown that the eigenvalues of the Hamiltonian are real in one side of the transition, allowing the \({\cal P}{\cal T}\)-symmetric (PTS) phase, while complex eigenvalues appear in the other side with the \({\cal P}{\cal T}\)-symmetric broken (PTSB) phase. In recent years, \({\cal P}{\cal T}\)-symmetric Hamiltonians have been realized in balanced gain and loss systems with various setups, such as mechanical oscillators13, optical waveguides14,15, optical resonators16, microwave cavities17, lasers18, and optomechanical systems19, or in a state-dependent pure lossy system in which the lossy Hamiltonian H′ could be mapped to a \({\cal P}{\cal T}\)-symmetric Hamiltonian HPT for passive \({\cal P}{\cal T}\)-symmetry breaking20,21,22. \({\cal P}{\cal T}\)-transitions can be induced either by increasing the strength of dissipation or by tuning the periodicity of the dissipation, known as static or Floquet method respectively. By driving the system passing the exceptional point, it is predicted that \({\cal P}{\cal T}\)-transitions can reduce the overall dissipation of the system20,23,24 and allow topological structures around the exceptional point19,25. 
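As context for the transition just described, the minimal two-mode example below (a sketch with illustrative parameters, not code from this study) diagonalizes a PT-symmetric Hamiltonian of the form H_PT = J sigma_x + i(Gamma/2) sigma_z, the same form used later in this paper, and shows the eigenvalues switching from purely real to a complex-conjugate pair as Gamma/J crosses the exceptional point at 2.

import numpy as np

J = 1.0                                          # coupling strength, sets the scale
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

for ratio in (0.5, 1.0, 1.9, 2.0, 2.1, 3.0):     # Gamma / J
    gamma = ratio * J
    H_pt = J * sigma_x + 1j * (gamma / 2.0) * sigma_z
    evals = np.sort_complex(np.linalg.eigvals(H_pt))
    # Analytically the eigenvalues are +/- sqrt(J^2 - Gamma^2/4): real below the
    # exceptional point at Gamma/J = 2, a complex-conjugate pair above it.
    print(f"Gamma/J = {ratio:3.1f}   eigenvalues = {np.round(evals, 4)}")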
Floquet method is particular interesting because time-periodic modulation can break the continuous time translation symmetry, providing an enriched phase diagram with many fascinating features20. Here we present an experimental study of \({\cal P}{\cal T}\)-symmetry-breaking transitions induced by time-periodic dissipation or coupling in a two-spin system of ultracold atoms. Our experimental results verify that \({\cal P}{\cal T}\)-symmetry breaking and restoring transitions can occur by tuning either dissipative or coupling frequency even at vanishingly small dissipation strength. We further map the Floquet \({\cal P}{\cal T}\)-phase diagrams by tracing the atom loss of each spin state, and observe the multiphoton resonances and the power broadening associated to the PTSB phase. \({\cal P}{\cal T}\) transition with static dissipation We prepare a noninteracting Fermi gas of 6Li atoms at the two lowest 2S1/2 hyperfine levels26,27, labeled as |↑〉 and |↓〉. These two-spin states are coupled by a radio-frequency (RF) field with a coupling strength of J. A resonant optical beam is used to excite the atoms from |↓〉 to the 2P excited state \(\left| e \right\rangle\) and generates the atom loss in |↓〉 with a rate of Γ (Fig. 1a). The Hamiltonian for this dissipative two-spin system is given by $$H(t) = J\sigma _{\mathrm{x}} - i\Gamma (t)\left| \downarrow \right\rangle \left\langle \downarrow \right| = - i{\mathrm{\Gamma }}(t){\mathrm{/}}2{\bf{I}} + H_{{\mathrm{PT}}}(t)$$ where HPT(t) = Jσx + iΓ(t)σz/2 is a \({\cal P}{\cal T}\)-symmetric Hamiltonian, and I is the unit matrix. The system is prepared with all atoms in |↑〉, and evolves for a time of t. Then the in-trap atom numbers, \(n_ \uparrow ^\prime (t)\) and \(n_ \downarrow ^\prime (t)\), are measured by the double-shot absorption imaging of the two-spin states, giving the total atom number \(n{\prime}(t) = n_ \uparrow ^\prime + n_ \downarrow ^\prime\). We map n′(t) to a scaled, normalized atom number n(t) associated to HPT, and then use n(t) to characterize the \({\cal P}{\cal T}\)-transitions [See Methods]. The parity-time transitions induced by time-periodic modulations. a Experimental setup. An RF field is used to couple the two-spin states. A resonant optical beam is used to generate spin-dependent dissipation (atom loss) in the |↓〉 state. b Γ(t) for time-periodic dissipation. c J(t) for time-periodic coupling. d–f Time-periodic dissipation: d The phase diagram near the primary resonance. The red color region represents the \({\cal P}{\cal T}\)-symmetric broken (PTSB) phase with (|μ+| − |μ−|)/(|μ+| + |μ−|) as the value of the color density, as the following figures. e n(t) of the \({\cal P}{\cal T}\)-symmetric (PTS) phase at Ωd/J = 1.65 (blue circles) and Ωd/J = 2.40 (black diamonds). f n(t) of the PTSB phase at Ωd/J = 2.01 (red boxes). g–i Time-periodic coupling: g The phase diagram near the primary resonance. h n(t) of the PTS phase at Ωd/J = 0.67 (blue circles) and Ωd/J = 1.25 (black diamonds). i n(t) of the PTSB phase at Ωd/J = 0.99 (red boxes). In e, f, h, i, the data with solid shapes corresponds with the dissipation (coupling) on, and the data with empty shapes are with the dissipation (coupling) off. In all figures, the black curves are the numerical simulation without free parameter, and the pink curves are the sinusoidal (for PTS) or exponential (for PTSB) fitting. J = π × 2.15 kHz for all data presented in this paper if not mentioned. 
The error bars are the standard deviation of the measurements For static dissipation, Γ(t) is a constant value of Γ0. When Γ0/J < 2, the eigenvalues of HPT are real values of \(\pm \sqrt {J^2 - {\mathrm{\Gamma }}_0^2{\mathrm{/}}4}\), and n(t) oscillates at frequency \(\pi {\mathrm{/}}\sqrt {J^2 - \Gamma _0^2{\mathrm{/}}4}\). The \({\cal P}{\cal T}\) transition occurs at Γ0/J = 2 where the oscillation period diverges. When Γ0/J > 2, the eigenvalues of HPT become complex numbers, and one of the eigenmode exponentially grows [See Supplementary Note 1]. These predictions are verified in our experiments [See Supplementary Fig. 1], and the measured exceptional point agrees with the theoretical model very well [See Supplementary Fig. 2]. The static dissipation experiment is related to the previous quantum zeno effect (QZE) experiments of ultracold atoms with strong-loss induced measurement28,29,30,31. In those experiments, QZE refers to the reduction of the rate of transferring from one state to a second state by the projection measurement of the second state. Due to a strong-loss induced irreversible measurement, the reverse-transfer probability from the second state to the first one is treated as zero as well as the occupation of the second state. However, in our dissipation experiment, the transfer probability from the second to the first level is nonzero, and the PTSB phase refers to the slow-down of the decay of the total atom number. Thus, the results cannot be explained purely in terms of QZE, except for the limit of an extremely strong dissipation case, in which the strong atom loss can be treated as an irreversible projection measurement of the second level. Observation of the \({\cal P}{\cal T}\)-transitions with time-periodic driving Floquet method enriches the phase diagram of a \({\cal P}{\cal T}\)-symmetric system by periodically modulating Hamiltonian H(t) = H(t + T). Previously, the extraordinary structure of the phase diagram has been theoretically predicted20,23, but has never been verified experimentally due to the difficulty of precisely controlling the time-dependent dissipation. In our experiment, the optical and RF field provide versatile tools to manipulate the atom loss and coupling of spin levels, so that two types of Floquet Hamiltonians could be implemented: spin-dependent time-periodic dissipation and time-periodic coupling between two spins. We first study time-periodic dissipation, in which a square-wave resonant beam is applied to generate time-dependent dissipation of the atoms in an optical trap. The coupling strength J is fixed and the dissipation strength is modulated between Γ and 0 with a frequency of Ωd (Fig. 1b). In contrast with the static dissipation, \({\cal P}{\cal T}\)-transitions under time-periodic dissipation depends on the modulation frequency and can occur at vanishingly small dissipation strength with infinite numbers of the resonance peaks [See the Supplementary Note 2]. The primary resonance peak of the \({\cal P}{\cal T}\)-transition appears Ωd/J = 2, where the transition behavior of n(t) in the weak dissipation limit \({\mathrm{\Gamma /}}J = 0.2 \ll 2\) is shown in Fig. 1d–f. When Ωd/J is tuned to the PTSB phase, n(t) increases exponentially, in contrast with the PTS phase where n(t) exhibits bounded oscillation n(t) ∝ sin[(Ωd − 2J)t]. In the above cases, the PTSB phases have been observed even when the eigenvalues of H(t) are real all the time. 
Such \({\cal P}{\cal T}\)-symmetry breaking can be determined by the non-unitary time-evolution operator GPT [See Methods], which has two eigenvalues \(\mu _ \pm \propto e^{ - i\epsilon _ \pm t}\). \(\epsilon _ \pm\) is the quasienergies of the effective Floquet Hamiltonian [See Supplementary Note 3]. If the magnitude of μ± are equal, \(e^{i\epsilon _ \pm T}\) is a pure phase factor and \(\epsilon _ \pm\) must be the real numbers, which indicate a PTS phase. On the contrary, the unequal magnitude of μ± denote the complex values of \(\epsilon _ \pm\) representing a PTSB phase. For time-periodic coupling in the weak dissipation limit, Γ/J is constant and J(t) is modulated at the frequency Ωc (Fig. 1c) and the the primary resonance peak of the PTSB phase is at Ωc/J = 1 (Fig. 1g). When Ωc/J is tuned to the primary resonance region, n(t) shows the similar behavior as time-periodic dissipation, where the exponential increase of n(t) appears (Fig. 1i), while the PTS phase exhibits the bound oscillation which could be parameterized by n(t) ∝ sin[(Ωc − J)t] in the weak dissipation limit (Fig. 1h). The measurements of that primary resonance of the PTSB phase verify that \({\cal P}{\cal T}\)-symmetry transitions can happen with an arbitrary small dissipation under time-periodic driving. Furthermore, there exist infinite numbers of transitions induced by multiphoton resonances which are investigated as follows. Multiphoton resonances with time-periodic dissipation For \({\cal P}{\cal T}\)-transitions with time-periodic dissipation, there exist infinite numbers of the PTSB phases induced by multiphoton process in a non-Hermitian Rabi model20. Their widths have been predicted to decrease with the index number of the the resonances. For a square-wave modulation, the widths of the PTSB phases in the weak dissipation limit are $$\delta {\mathrm{\Omega }}_{\mathrm{d}}({\mathrm{\Gamma }},{\mathrm{\Omega }}_{\mathrm{n}}) = \frac{{\mathrm{\Gamma }}}{\pi }\left( {\frac{{{\mathrm{\Omega }}_{\mathrm{n}}}}{{2J}}} \right)^2,$$ where Ωn = 2J/n is the resonance peak under zero dissipation with n as the odd number 1, 3, 5 …. Γ is the magnitude of the square-wave dissipation [See Supplementary Note 2]. Figure 2 show the broadening of the PTSB phases for one- (primary), three-, and five-photon resonances. To measure the width of the resonances, the residual atom number n(tf, Ω) is probed at a fixed time point tf for various modulation frequencies Ω [See Supplementary Note 4]. It is noted that, for the purpose of mapping the phase diagram, it is ideal to choose tf as large as possible so that n(tf) can reflect the long-term dynamics. However, because we map a pure lossy system to a \({\cal P}{\cal T}\)-symmetric Hamiltonian, tf must be remained in a finite range for the reasonable signal-to-noise ratio of the unscaled atom number n′(t) [See Supplementary Note 5]. In our experiment, we choose tf to be larger than several oscillation periods so that n(tf, Ω) can present the trend of increasing in the PTSB phase. As shown in Fig. 2a, the half width at half maximum (HWHM) of the primary resonance is proportional to the strength of time-periodic dissipation. Such behavior is the non-Hermitian analog of the resonance broadening induced by the Bloch–Siegert shifts of a strong driving Hermitian system. The width of the residual atom number also depends on the probe time and gets narrower for the longer probing times, which approaches the width of the PTSB phase predicted by theoretical calculations. 
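Before turning to the multiphoton structure, note that the criterion just described, comparing the magnitudes of the two eigenvalues mu_+/- of the one-period operator G_PT, is straightforward to evaluate numerically. The sketch below does so for square-wave dissipation, assuming a 50% duty cycle and illustrative parameters (Gamma/J = 0.2, J = 1 in arbitrary units); it is a numerical cartoon rather than the analysis used in the paper.

import numpy as np
from scipy.linalg import expm

J = 1.0
gamma = 0.2 * J                                  # weak dissipation, as in the Gamma/J = 0.2 data
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def one_period_operator(omega_d, duty=0.5):
    # G_PT over one period of square-wave dissipation: loss on for a fraction
    # `duty` of the period, off for the rest (the 50% duty cycle is an assumption).
    T = 2 * np.pi / omega_d
    H_on = J * sx + 1j * (gamma / 2) * sz        # H_PT with the loss on
    H_off = J * sx                               # loss off
    return expm(-1j * H_off * (1 - duty) * T) @ expm(-1j * H_on * duty * T)

for ratio in (1.65, 2.00, 2.40):                 # Omega_d / J values quoted in Fig. 1e, f
    mu = np.abs(np.linalg.eigvals(one_period_operator(ratio * J)))
    asym = (mu.max() - mu.min()) / (mu.max() + mu.min())
    phase = "PT-broken" if asym > 1e-6 else "PT-symmetric"
    print(f"Omega_d/J = {ratio:4.2f}   |mu| = {np.round(np.sort(mu), 4)}   -> {phase}")

Because the eigenvalues of the one-period operator are unchanged by a shift of the time origin, this classification does not depend on the modulation phase; the phase only controls how strongly the initial state overlaps each Floquet eigenmode.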
For the finite probe time, the width is a qualitatively measure of the dependence of the PTSB regime on the dissipation strength [See Supplementary Note 4]. Detecting the resonances with time-periodic dissipation. a n(tf, Ω) around the one-photon resonance Ωn = 2J. Purple for Γ/J = 0.22, tf = 1.98 ms; Yellow for Γ/J = 0.11, tf = 3.31 ms; Red for Γ/J = 0.082, tf = 3.97 ms; and Blue for Γ/J = 0.05, tf = 5.29 ms. b n(tf, Ω) around the three-photon resonance Ωn = 2J/3. Red for Γ/J = 0.11, tf = 2.80 ms; Blue for Γ/J = 0.065, tf = 3.79 ms. c n(t) at Ωn = 2J/3 with Γ/J = 0.065 located in (b) by the diamond shape. d n(tf, Ω) around the five-photon resonance Ωn = 2J/5. Red for Γ/J = 0.07, tf = 4.64 ms; Blue for Γ/J = 0.055, tf = 4.74 ms. e n(t) at Ωn = 2J/5 with Γ/J = 0.055 located in (d) by the circle shape. The initial phase ϕ of the square-wave modulation is chosen to anti-synchronize to the RF field with ϕ = π/2, π/4, π/10 for one-, three-, and five-photon resonance respectively. In a, b, and d, all the data and simulation curves have a base line of n = 1, but are vertically shifted for the presentation purpose. The side peaks are due to the finite probe time. In c, e, the data with solid shapes corresponds with the dissipation on, the data with empty shapes are with the dissipation off, and the pink curves are the exponential fitting. For all figures, the solid curves are numerical simulations without free parameter. The error bars represents the standard deviation of the measurements Comparing with the Hermitian system where the multiphoton resonance is difficult to be observed at the weak driving limit, in a non-Hermitian system, the time-periodic dissipation significantly broadens the width of the PTSB phase so that the multiphoton resonance could be observed clearly with the weak dissipation. The widths of the three-(Fig. 2b), five- (Fig. 2d) photon resonances, agree with the theoretical phase diagram very well. At the exact resonant frequencies, the exponentially increase of n(t) with a very small dissipation strength are recorded in Fig. 2c (three-photon) and Fig. 2e (five-photon) manifesting the PTSB phase. Multiphoton resonances with time-periodic coupling The phase diagram of time-periodic coupling is studied by modulating the coupling between the two-spin states. The resonance widths of the PTSB phases are given by $$\delta {\mathrm{\Omega }}_{\mathrm{c}}({\mathrm{\Gamma }},{\mathrm{\Omega }}_{\mathrm{n}}) = {\mathrm{\Gamma }}\frac{{{\mathrm{\Omega }}_{\mathrm{n}}}}{{2J}},$$ where Ωn = 2J/n is the resonant peak with the even integer n = 2, 4, 6 .... Eq. (3) indicates that the PTSB phases of the time-periodic coupling have the wider width than that of time-periodic dissipation, which scales with the multiphoton index number n as 1/n instead of 1/n2 for time-periodic dissipation [See Supplementary Note 2]. The first four multiphoton resonances are shown in Fig. 3a, where the widths of the PTSB phases increase with dissipation. The increasing of n(t) at the resonance frequencies are fitted by the exponential curves in Fig. 3b to verify the PTSB phase. Detecting multiphoton resonances with time-periodic coupling. a The top frame is the phase diagram. n(tf, Ω) are shown below. From the second top frame to the bottom one, Γ/J = 0.21, 0.16, 0.095, tf = 2.70, 3.70,6.48 ms respectively. The side peaks are mainly due to the finite probe time. b n(t) with Γ/J = 0.095 at the resonant peak Ωn = J, J/2, J/3 from the top to bottom frame. 
The data with solid shapes corresponds with the dissipation on, the data with empty shapes are with the dissipation off. For all figures, solid curves are the numerical simulation without free parameters. The initial phases of the square-wave modulation are set to zero. The error bars represent the standard deviation of the measurements It is interesting to find that, the resonant peaks Ωn = 2J/n of the time-periodic dissipation have odd integers n = 1, 3, 5 ..., but the time-periodic coupling ones have even integers n = 2, 4, 6 .... The odd or even rule can be explained by a simple picture. For example, in the time-periodic coupling case of the weak loss limit, all atoms are initially in the lossless (up) state and the coupling is turned on for nπ/(2J). If n is even, then the coupling is on exactly for the time of multiple 2π Rabi pulses such that all atoms are back to the up state, and will remain in this state for the next half cycle of coupling-off. During this half cycle, no atoms are lost so that the scaled total atom number increase because the scaling assumes equal loss in both spin states. Overall, the system spends more time in the lossless (up) state when n is even numbers. This is not the case for n to be odd, where on average the system spends the same amount of time in both up and down states because the coupling is turned on for the odd numbers of π pulse. The similar picture is also applied to the time-periodic dissipation case, where the atom loss is turned on for a time duration of nπ/(2J) with n being odd numbers. This amount of time is odd numbers of π Rabi pulse. With the proper choice of phase, the atom loss can only present when the majority of the atoms are in the lossless (loss) state so that the scaled total atom number increase (decrease) exponentially. Either increasing or decreasing depends on the phase of the dissipation, which corresponds the two eigenstates in the PTSB phase. In the experiments, we usually optimize the phase of the time-periodic modulation to obtain the strongest signal, resulting from ensuring the largest overlap between the initial state and the slowly decaying eigenmode of the corresponding Floquet Hamiltonian. For more general initial states, such as a balanced mixture of up and down state, as long as there is a nonzero overlap between the initial state and the slow mode, the \({\cal P}{\cal T}\)-symmetry-breaking signatures of the slow-decaying will be visible in the long-time limit. The laser-cooled ultracold atoms provide a clean and well controllable platform for studying \({\cal P}{\cal T}\)-symmetric Hamiltonians. Previously, phase transitions observed in cold atom systems are usually driven by tunable interparticle interactions and, in principle, occur only in the thermodynamic limit. However, \({\cal P}{\cal T}\)-symmetric breaking transitions, in contrast, can occur in a single two-level system with localized loss. The fate of the former transitions in the presence of such a loss has not been fully understood, as is the fate of latter transition in the presence of interparticle interactions. Investigating the interplay between these two classes of transitions will require quantum simulators with tunable interparticle interactions and engineered state-dependent dissipation, both of which can be realized with certain species of ultracold atoms, such as fermionic 6Li atoms used in this experiment. 
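Returning to the resonance structure mapped above, the odd/even selection rule can be checked directly from the same one-period operator: scanning the drive frequency near each candidate resonance Omega_n = 2J/n and recording the normalized eigenvalue splitting (|mu_+| - |mu_-|)/(|mu_+| + |mu_-|), the quantity used as the color density in the phase diagrams, picks out odd n for time-periodic dissipation and even n for time-periodic coupling, as stated by Eqs. (2) and (3). The sketch below does this with illustrative parameters (Gamma/J = 0.2 and 50% duty cycle square waves, both assumptions); it is a numerical cartoon of the phase diagrams, not the calculation used in the paper.

import numpy as np
from scipy.linalg import expm

J, gamma = 1.0, 0.2                              # illustrative values, Gamma/J = 0.2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def pt_asymmetry(omega, modulate, duty=0.5):
    # (|mu+| - |mu-|)/(|mu+| + |mu-|) of the one-period operator G_PT for a
    # square-wave drive; a value well above numerical noise marks the PT-broken phase.
    T = 2 * np.pi / omega
    H_on = J * sx + 1j * gamma / 2 * sz                       # loss and coupling both on
    H_off = J * sx if modulate == "dissipation" else 1j * gamma / 2 * sz
    G = expm(-1j * H_off * (1 - duty) * T) @ expm(-1j * H_on * duty * T)
    m = np.sort(np.abs(np.linalg.eigvals(G)))
    return (m[1] - m[0]) / (m[1] + m[0])

print(" n   Omega_n/J   dissipation   coupling")
for n in range(1, 7):
    wn = 2 * J / n
    window = np.linspace(0.95 * wn, 1.05 * wn, 201)           # scan around 2J/n
    d = max(pt_asymmetry(w, "dissipation") for w in window)
    c = max(pt_asymmetry(w, "coupling") for w in window)
    print(f"{n:2d}   {wn:9.3f}   {d:11.4f}   {c:8.4f}")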
As a starting point of this route, we apply state-dependent dissipation to ultracold 6Li atoms to study \({\cal P}{\cal T}\)-transitions in a two-level system. With the advantages of modulating the resonant optical and RF field as the versatile tools for time-periodic driving, we could manipulate the atom loss as well as the coupling of spin levels, and experimentally map both time-periodic dissipation and coupling. The phase diagram of the static dissipation and time-periodic dissipation (coupling) are explored by tracing the time-evolution of the atoms. While the single exceptional point under static dissipation is determined as usual, our results verify remarkably rich phase diagrams with multiple Floquet \({\cal P}{\cal T}\)-transitions associated to time-periodic driving. It is shown that the PTSB phases can be induced by judiciously selected temporal profiles of state-dependent dissipation or coupling with vanishingly small strength of the dissipation. The multiphoton resonant structure of \({\cal P}{\cal T}\)-transitions are demonstrated. Such Floquet method thus provide an experimental platform to study time-dependent \({\cal P}{\cal T}\)-symmetric Hamiltonians. Our system has potential to be extended to more complex situations: one is to study the topological phenomena associated to non-Hermitian Hamiltonian and the other is to explore an interacting system with a vanishingly small, time-modulated dissipation. For the formal one, if we use the unresonant RF pulses to couple the spin levels, a detuning term will appear in the diagonal part of the Hamiltonian, and we can adiabatically encircle the exceptional points by changing the detuning and dissipation simultaneously to observe the topological phenomena associated to the non-Hermitian systems19,25,32,33,34,35,36,37,38. For the latter one, the interplay between the \({\cal P}{\cal T}\)-transition and the BEC-BCS (Bose–Einstein condensate to Bardeen–Cooper–Schrieffer pairing) crossover can be investigated by sweeping the ultracold Fermi gas from the noninteracting limit (presented here) to the unitary, strongly-interacting limit39. This approach, where a single-particle, state-dependent loss is used in conjunction with strong interparticle interactions, provides exciting opportunities to explore physical phenomena in open many-body quantum systems. Experimental system We prepare a dissipative two-level system with a noninteracting Fermi gas. 6Li atoms are prepared in the two lowest hyperfine states, |↑〉 ≡ |F = 1/2, mF = 1/2〉 and |↓〉 ≡ |F = 1/2, mF = −1/2〉, in a magneto-optical trap. The precooled atoms are then transferred into a crossed-beam optical dipole trap made by a 100 Watt fiber laser. The bias magnetic field is swept to 330 G to implement an evaporative cooling26. The trap potential is lowered to generate a final trap depth of 2.2 μK. In order to null the interaction between the two hyperfine states, the magnetic field is fast swept to 527.3 G, where the s-wave scattering length of the |↑〉 and |↓〉 states is zero27. The lifetime of the noninteracting Fermi gas is about 20 s, which is three orders of magnitude longer than our typical experimental time. So when the dissipative optical field is absent, this noninteracting Fermi gas can be treated as a closed, two-level quantum system. To prepare a single component Fermi gas in the |↑〉 state as the initial state, we apply a 5 ms optical pulse with −2π × 30 MHz detuning from the |↓〉 → 2P3/2 transition to blow away atoms in the |↓〉 state. 
We typically have about N = 2.0 × 105 atoms in a pure |↑〉 state at temperature T ≈ 0.8 μK and T/TF ≈ 0.5 with TF is the Fermi temperature. To generate Rabi oscillation between the two-spin states, we couple them via an RF field with frequency ω and coupling strength J. An optical beam resonant with the |↓〉 → 2P3/2 transition is used to create the number dissipation (atom loss) in the |↓〉 state. The resonant-photon recoil energy of 3.5 μK is ~50% larger than the trap depth, so the atom that absorb a photon will leave the trap quickly, resulting a state-dependent atom loss. The RF coupling strength J is measured in the absence of the dissipative optical field, while the atom-number loss rate 2Γ is measured in the absence of the RF coupling. Figure 4a shows the Rabi oscillation with Rabi frequency 2J. Figure 4b shows the atom numbers \(n_ \downarrow ^\prime (t) = n_ \downarrow ^\prime (0){\mathrm{exp}}( - 2{\mathrm{\Gamma }}t)\) with a constant dissipative optical field that only couples the |↓〉 state to the continuum. These measurements are used to calibrate the values of J and Γ for the dissipative two-state Rabi system. Characterization of a dissipative Rabi system. Symbols are experimental data. Solid lines are theoretical fits. a In the presence of the RF field but without the dissipative optical field, atom numbers \(n_\sigma ^\prime (t)\) show Rabi oscillations with a Rabi frequency of 2J = (2π) × 2.15 kHz. b In the presence of an optical field resonant with |↓〉 but without the RF field, \(n_ \downarrow ^\prime (t)\) shows an exponential decay: Γ = 0.30J (red squares), Γ = 0.98J (black triangles), Γ = 2.35J (green diamonds). \(n_ \uparrow ^\prime (t)\) remains almost constant during the experimental time. Blue circles show \(n_ \uparrow ^\prime (t)\) when Γ = 0.30J The dissipative two-state system is described by a non-Hermitian Hamiltonian (ħ = 1) $$H = + \frac{{\omega _0}}{2}\sigma _{\mathrm{z}} - i\frac{{{\mathrm{\Gamma }}(t)}}{2}(1 - \sigma _{\mathrm{z}}) + 2J\,{\mathrm{cos}}(\omega t)\sigma _{\mathrm{x}},$$ where ω0 = 2π × 75.6 MHz is the hyperfine splitting at 527.3 G. When the RF driving is close to the resonance, that is ω ≈ ω0, with the rotating wave approximation in the interacting picture, H(t) = −iΓ(t)/2 + HPT(t), where the non-Hermitian, \({\cal P}{\cal T}\)-symmetric Hamiltonian is given by (ħ = 1) \(H_{{\mathrm{PT}}} = J\sigma _{\mathrm{x}} + i{\mathrm{\Gamma }}(t)\sigma _{\mathrm{z}}{\mathrm{/}}2 = {\cal P}{\cal T}H_{{\mathrm{PT}}}{\cal P}{\cal T}\), where \({\cal P} = \sigma _{\mathrm{x}}\) and \({\cal T} = \ast\) denotes complex conjugation operation. Starting with an initial state |ψ(0)〉, the decaying atom numbers for the two states are given by \(n_\sigma ^\prime (t) \equiv \left| {\left\langle \sigma \right|G\prime (t)\left| {\psi (0)} \right\rangle } \right|^2\) where $$G\prime (t) = T\,{\mathrm{exp}}\left( { - i{\int}_0^t \,H\prime (t\prime )dt\prime } \right),$$ is the non-unitary time-evolution operator obtained via the time-ordered product. It is also useful to define scaled atom number nσ(t) = |〈σ|GPT(t)|ψ(0)〉|2 where GPT(t) is the corresponding time-evolution operator for HPT(t). It follows that \(n_\sigma (t) = n_\sigma ^\prime (t) \times {\mathrm{exp}}\left( {{\int}_0^t {\kern 1pt} {\mathrm{\Gamma }}(t\prime )dt\prime {\mathrm{/}}2} \right)\). 
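As a numerical companion to the mapping above, the sketch below (illustrative parameters, static dissipation) integrates the lossy Hamiltonian H' = J sigma_x - i Gamma |down><down| and then undoes the overall decay by rescaling the state amplitudes with exp(Gamma t/2), the overall factor separated out above; at the level of populations this rescaling squares to exp(Gamma t). The rescaled total population is the n(t) governed by H_PT.

import numpy as np
from scipy.linalg import expm

J = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
down_proj = np.array([[0, 0], [0, 1]], dtype=complex)   # |down><down|
psi0 = np.array([1.0, 0.0], dtype=complex)              # all atoms start in |up>

def scaled_total(gamma, t):
    # Evolve under the lossy H' = J*sx - i*gamma*|down><down| (static loss), then
    # rescale the amplitudes by exp(gamma*t/2) (populations by exp(gamma*t)) to
    # obtain the total population n(t) associated with H_PT.
    psi = expm(-1j * (J * sx - 1j * gamma * down_proj) * t) @ psi0
    psi_pt = np.exp(gamma * t / 2) * psi
    return float(np.sum(np.abs(psi_pt) ** 2))

for gamma in (1.0 * J, 3.0 * J):                 # below and above the exceptional point at 2J
    tag = "PT-symmetric (bounded oscillation)" if gamma < 2 * J else "PT-broken (growth)"
    vals = [scaled_total(gamma, t) for t in np.linspace(0.0, 3.0 / J, 7)]
    print(f"Gamma/J = {gamma / J:.1f}  {tag}:", np.round(vals, 2))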
In a \({\cal P}{\cal T}\)-symmetric system, the \({\cal P}{\cal T}\)-symmetric phase is signaled by non-decaying, oscillatory nσ(t) and the \({\cal P}{\cal T}\)-broken phase is signaled by an exponentially increasing nσ(t). The data that support the findings of this study are available from the corresponding author upon reasonable request. Bender, C. M. & Boettcher, S. Real spectra in non-Hermitian Hamiltonians having symmetry. Phys. Rev. Lett. 80, 5243–5246 (1998). ADS MathSciNet CAS Article Google Scholar El-Ganainy, R., Makris, K. G., Christodoulides, D. N. & Musslimani, Z. H. Theory of coupled optical -symmetric structures. Opt. Lett. 32, 2632–2634 (2007). ADS CAS Article Google Scholar Bender, C. M. symmetry in quantum physics: from a mathematical curiosity to optical experiments. Europhys. News 47, 17–20 (2016). Lin, Z. et al. Unidirectional invisibility induced by -symmetric periodic structures. Phys. Rev. Lett. 106, 213901 (2011). Feng, L. et al. Nonreciprocal light propagation in a silicon photonic circuit. Science 333, 729–733 (2011). Feng, L. et al. Experimental demonstration of a unidirectional reflectionless parity-time metamaterial at optical frequencies. Nat. Mater. 12, 108–113 (2013). Peng, B. et al. Loss-induced suppression and revival of lasing. Science 346, 328–332 (2014). Hodaei, H., Miri, M.-A., Heinrich, M., Christodoulides, D. N. & Khajavikhan, M. Parity-time–symmetric microring lasers. Science 346, 975–978 (2014). Feng, L., Wong, Z. J., Ma, R.-M., Wang, Y. & Zhang, X. Single-mode laser by parity-time symmetry breaking. Science 346, 972–975 (2014). Kato, T. Perturbation theory for linear operators. (Springer-Verlag, Berlin, New York, 1966). Heiss, W. D. Exceptional points of non-Hermitian operators. J. Phys. A: Math. Gen. 37, 2455 (2004). ADS MathSciNet Article Google Scholar Bender, C. M. Making sense of non-hermitian hamiltonians. Rept. Prog. Phys. 70, 947 (2007). Bender, C. M., Berntson, B. K., Parker, D. & Samuel, E. Observation of phase transition in a simple mechanical system. Am. J. Phys. 81, 173–179 (2013). Rüter, C. E. et al. Observation of parity-time symmetry in optics. Nat. Phys. 6, 192–195 (2010). Regensburger, A. et al. Parity-time synthetic photonic lattices. Nature 488, 167–171 (2012). Lee, S.-B. et al. Observation of an exceptional point in a chaotic optical microcavity. Phys. Rev. Lett. 103, 134101 (2009). Dembowski, C. et al. Experimental observation of the topological structure of exceptional points. Phys. Rev. Lett. 86, 787 (2001). Brandstetter, M. et al. Reversing the pump dependence of a laser at an exceptional point. Nat. Commun. 5, 192–195 (2014). Xu, H., Mason, D., Jiang, L. & Harris, J. G. E. Topological energy transfer in an optomechanical system with exceptional points. Nature 537, 80 (2016). Lee, T. E. & Joglekar, Y. N. -symmetric Rabi model: perturbation theory. Phys. Rev. A. 92, 042103 (2015). Guo, A. et al. Observation of -symmetry breaking in complex optical potentials. Phys. Rev. Lett. 103, 093902 (2009). Xiao, L. et al. Observation of topological edge states in parity-time-symmetric quantum walks. Nat. Phys. 13, 1117 (2017). Joglekar, Y. N., Marathe, R., Durganandini, P. & Pathak, R. K. spectroscopy of the Rabi problem. Phys. Rev. A. 90, 040101 (2014). Ben-Aryeh, Y., Mann, A. & Yaakov, I. Rabi oscillations in a two-level atomic system with a pseudo-hermitian hamiltonian. J. Phys. Math. Theor. 37, 12059 (2004). ADS MathSciNet MATH Google Scholar Doppler, J. et al. 
Dynamically encircling an exceptional point for asymmetric mode switching. Nature 537, 76 (2016). Li, J., Liu, J., Xu, W., deMelo, L. & Luo, L. Parametric cooling of a degenerate Fermi gas in an optical trap. Phys. Rev. A. 93, 041401(R) (2016). O'Hara, K. M. et al. Measurement of the zero crossing in a Feshbach resonance of fermionic 6Li. Phys. Rev. A. 66, 041401(R) (2002). Streed, E. W. et al. Continuous and pulsed quantum zeno effect. Phys. Rev. Lett. 97, 260402 (2006). Fischer, M. C., Gutierrez-Medina, B. & Raizen, M. G. Observation of the quantum zeno and anti-zeno effects in an unstable system. Phys. Rev. Lett. 87, 040402 (2001). Zhu, B. et al. Suppressing the loss of ultracold molecules via the continuous quantum zeno effect. Phys. Rev. Lett. 112, 070404 (2014). Patil, Y. S., Chakram, S. & Vengalattore, M. Measurement-induced localization of an ultracold lattice gas. Phys. Rev. Lett. 115, 140402 (2015). Mailybaev, A. A., Kirillov, O. N. & Seyranian, A. P. Geometric phase around exceptional points. Phys. Rev. A. 73, 014104 (2005). Uzdin, R., Mailybaev, A. & Moiseyev, N. On the observability and asymmetry of adiabatic state flips generated by exceptional points. J. Phys. Math. Theor. 44, 435302 (2011). Wu, J.-H., Artoni, M. & Rocca, G. C. L. Non-hermitian degeneracies and unidirectional reflectionless atomic lattices. Phys. Rev. Lett. 113, 123004 (2014). Gao, T. et al. Observation of non-hermitian degeneracies in a chaotic exciton-polariton billiard. Nature 526, 554 (2015). Milburn, T. J. et al. General description of quasiadiabatic dynamical phenomena near exceptional points. Phys. Rev. A. 92, 052124 (2015). Hassan, A. U., Zhen, B., Soljacic, M., Khajavikhan, M. & Christodoulides, D. N. Dynamically encircling exceptional points: exact evolution and polarization state conversion. Phys. Rev. Lett. 118, 093002 (2017). Yoon, J. W. et al. Time-asymmetric loop around an exceptional point over the full optical communications band. Nature 562, 86 (2018). Luo, L., Clancy, B., Joseph, J., Kinast, J. & Thomas, J. E. Measurement of the entropy and critical temperature of a strongly interacting Fermi gas. Phys. Rev. Lett. 98, 080402 (2007). L.L. is a member of the Indiana University Center for Spacetime Symmetries (IUCSS). L.L. received supports from National Natural Science Foundation of China under Grant No. 11774436, Sun Yat-sen University Discipline Construction Fund, Sun Yat-sen University Three Major Construction Fund, Indiana University Collaborative Research Grant. Y.N.J. received NSF grant no. DMR-1054020. J.Li. received supports from National Natural Science Foundation of China under Grant No. 11804406. School of Physics and Astronomy, Sun Yat-sen University, 519082, Zhuhai, China Jiaming Li, Leonardo de Melo & Le Luo Department of Physics, Indiana University Purdue University Indianapolis (IUPUI), Indianapolis, IN, 46202, USA Andrew K. Harter, Ji Liu, Leonardo de Melo, Yogesh N. Joglekar & Le Luo Jiaming Li Andrew K. Harter Ji Liu Leonardo de Melo Yogesh N. Joglekar Le Luo L.L. and Y.N.J. conceived the idea and supervised the study. J.Li. and L.d.m. set up experiments and performed measurements. L.L. designed and supervised the experiments. A.K.H., J.Li. and Y.N.J. carried out theoretical modeling. J.Li. and J.Liu. analyzed the data. J.Li., L.L. and Y.N.J. contributed to writing the manuscript. Correspondence to Yogesh N. Joglekar or Le Luo. 
Li, J., Harter, A.K., Liu, J. et al. Observation of parity-time symmetry breaking transitions in a dissipative Floquet system of ultracold atoms. Nat Commun 10, 855 (2019). https://doi.org/10.1038/s41467-019-08596-1 (received 27 February 2017). This article is published open access under a Creative Commons Attribution 4.0 International License.
1: Descriptive Statistics and the Normal Distribution
Book: Natural Resources Biometrics (Kiernan)

1.1: Descriptive Statistics
Diane Kiernan, SUNY College of Environmental Science and Forestry, via OpenSUNY
Source: https://milneopentextbooks.org/natural-resources-biometrics

Contents: Descriptive Measures; Measures of Center; Measures of Dispersion; Standard Error of Mean; Coefficient of Variation; Basic Statistics Example using Excel and Minitab Software; Graphic Representation; Bar Charts and Histograms.

A population is the group to be studied, and
population data is a collection of all elements in the population. For example: All the fish in Long Lake. All the lakes in the Adirondack Park. All the grizzly bears in Yellowstone National Park. A sample is a subset of data drawn from the population of interest. For example: 100 fish randomly sampled from Long Lake. 25 lakes randomly selected from the Adirondack Park. 60 grizzly bears with a home range in Yellowstone National Park. Figure 1. Using sample statistics to estimate population parameters. Populations are characterized by descriptive measures called parameters. Inferences about parameters are based on sample statistics. For example, the population mean (\(μ\)) is estimated by the sample mean (\( \bar x\) ). The population variance ( \(\sigma ^2\)) is estimated by the sample variance (\(s^2\)). Variables are the characteristics we are interested in. For example: The length of fish in Long Lake. The pH of lakes in the Adirondack Park. The weight of grizzly bears in Yellowstone National Park. Variables are divided into two major groups: qualitative and quantitative. Qualitative variables have values that are attributes or categories. Mathematical operations cannot be applied to qualitative variables. Examples of qualitative variables are gender, race, and petal color. Quantitative variables have values that are typically numeric, such as measurements. Mathematical operations can be applied to these data. Examples of quantitative variables are age, height, and length. Quantitative variables can be broken down further into two more categories: discrete and continuous variables. Discrete variables have a finite or countable number of possible values. Think of discrete variables as "hens". Hens can lay 1 egg, or 2 eggs, or 13 eggs… There are a limited, definable number of values that the variable could take on. Continuous variables have an infinite number of possible values. Think of continuous variables as "cows". Cows can give 4.6713245 gallons of milk, or 7.0918754 gallons of milk, or 13.272698 gallons of milk … There are an almost infinite number of values that a continuous variable could take on. Example \(\PageIndex{1}\): Is the variable qualitative or quantitative? (qualitative quantitative, quantitative, qualitative) Descriptive measures of populations are called parameters and are typically written using Greek letters. The population mean is \(\mu\) (mu). The population variance is \(\sigma ^2\) (sigma squared) and population standard deviation is \(\sigma \) (sigma). Descriptive measures of samples are called statistics and are typically written using Roman letters. The sample mean is \(\bar x\) (x-bar). The sample variance is \(s^2\) and the sample standard deviation is \(s\). Sample statistics are used to estimate unknown population parameters. In this section, we will examine descriptive statistics in terms of measures of center and measures of dispersion. These descriptive statistics help us to identify the center and spread of the data. The arithmetic mean of a variable, often called the average, is computed by adding up all the values and dividing by the total number of values. The population mean is represented by the Greek letter \(\mu\) (mu). The sample mean is represented by \(\bar x\) (x-bar). The sample mean is usually the best, unbiased estimate of the population mean. However, the mean is influenced by extreme values (outliers) and may not be the best measure of center with strongly skewed data. 
The following equations compute the population mean and sample mean.

$$\mu = \frac{\sum x_i}{N}$$

$$\bar x = \frac{\sum x_i}{n}$$

where \(x_i\) is an element in the data set, \(N\) is the number of elements in the population, and \(n\) is the number of elements in the sample data set.

Example 2: Mean. Find the mean for the following sample data set:

$$\bar x = \frac{6.4+5.2+7.9+3.4}{4} = 5.725$$

The median of a variable is the middle value of the data set when the data are sorted in order from least to greatest. It splits the data into two equal halves with 50% of the data below the median and 50% above the median. The median is resistant to the influence of outliers, and may be a better measure of center with strongly skewed data. The calculation of the median depends on the number of observations in the data set.

To calculate the median with an odd number of values (n is odd), first sort the data from smallest to largest.

Example 3: Calculating the median with an odd number of values. Find the median for the following sample data set:

$$23, 27, 29, 31, 35, 39, 40, 42, 44, 47, 51$$

The median is 39. It is the middle value that separates the lower 50% of the data from the upper 50% of the data.

To calculate the median with an even number of values (n is even), first sort the data from smallest to largest and take the average of the two middle values.

Example 4: Calculating the median with an even number of values.

$$23, 27, 29, 31, 35, 39, 40, 42, 44, 47$$

$$M = \frac{35+39}{2} = 37$$

The mode is the most frequently occurring value and is commonly used with qualitative data as the values are categorical. Categorical data cannot be added, subtracted, multiplied or divided, so the mean and median cannot be computed. The mode is less commonly used with quantitative data as a measure of center. Sometimes each value occurs only once and the mode will not be meaningful.

Understanding the relationship between the mean and median is important. It gives us insight into the distribution of the variable. For example, if the distribution is skewed right (positively skewed), the mean will increase to account for the few larger observations that pull the distribution to the right. The median will be less affected by these extreme large values, so in this situation, the mean will be larger than the median. In a symmetric distribution, the mean, median, and mode will all be similar in value. If the distribution is skewed left (negatively skewed), the mean will decrease to account for the few smaller observations that pull the distribution to the left. Again, the median will be less affected by these extreme small observations, and in this situation, the mean will be less than the median.

Figure 2. Illustration of skewed and symmetric distributions.

Measures of center look at the average or middle values of a data set. Measures of dispersion look at the spread or variation of the data. Variation refers to the amount that the values vary among themselves. Values in a data set that are relatively close to each other have lower measures of variation. Values that are spread farther apart have higher measures of variation. Examine the two histograms below. Both groups have the same mean weight, but the values of Group A are more spread out compared to the values in Group B. Both groups have an average weight of 267 lb. but the weights of Group A are more variable.

Figure 3. Histograms of Group A and Group B.
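Since the worked examples above are purely arithmetic, they can be checked with a few lines of code. The snippet below is a minimal illustration (not part of the original text) using Python's standard library; the small list of petal colors used for the mode is a made-up categorical example.

```python
import statistics

# Example 2: sample mean
sample = [6.4, 5.2, 7.9, 3.4]
print(statistics.mean(sample))       # 5.725

# Example 3: median with an odd number of values (the middle value)
odd_data = [23, 27, 29, 31, 35, 39, 40, 42, 44, 47, 51]
print(statistics.median(odd_data))   # 39

# Example 4: median with an even number of values (average of the two middle values)
even_data = [23, 27, 29, 31, 35, 39, 40, 42, 44, 47]
print(statistics.median(even_data))  # 37.0

# Mode: the most frequently occurring value, typically used with categorical data
petal_colors = ["red", "white", "red", "pink", "red", "white"]
print(statistics.mode(petal_colors)) # 'red'
```

Note that statistics.median sorts the data internally, so the inputs do not need to be sorted beforehand.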
This section will examine five measures of dispersion: range, variance, standard deviation, standard error, and coefficient of variation.

The range of a variable is the largest value minus the smallest value. It is the simplest measure and uses only these two values in a quantitative data set.

Example 5: Computing the range. Find the range for the given data set.

$$12, 29, 32, 34, 38, 49, 57$$

$$Range = 57 - 12 = 45$$

The variance uses the difference between each value and its arithmetic mean. The differences are squared to deal with positive and negative differences. The sample variance (\(s^2\)) is an unbiased estimator of the population variance (\(\sigma^2\)), with n – 1 degrees of freedom.

Degrees of freedom: in general, the degrees of freedom for an estimate is equal to the number of values minus the number of parameters estimated en route to the estimate in question.

The sample variance is unbiased due to the difference in the denominator. If we used "n" in the denominator instead of "n – 1", we would consistently underestimate the true population variance. To correct this bias, the denominator is modified to "n – 1".

Definition: population variance

$$\sigma^2 = \frac{\sum (x_i-\mu)^2}{N}$$

Definition: sample variance

$$s^2 = \frac{\sum (x_i- \bar x)^2}{n-1} = \frac{\sum x_i^2 - \frac{(\sum x_i)^2}{n}}{n-1}$$

Example 6: Computing variance. Compute the variance of the sample data: 3, 5, 7. The sample mean (\(\bar x\)) is 5. Then use the sample variance formula above:

$$s^2 = \frac{(3-5)^2 +(5-5)^2 + (7-5)^2}{3-1} = 4$$

The standard deviation is the square root of the variance (both population and sample). While the sample variance is the positive, unbiased estimator for the population variance, the units for the variance are squared. The standard deviation is a common method for numerically describing the distribution of a variable. The population standard deviation is \(\sigma\) (sigma) and the sample standard deviation is \(s\).

Definition: sample standard deviation

$$s = \sqrt{s^2}$$

Definition: population standard deviation

$$\sigma = \sqrt{\sigma^2}$$

Compute the standard deviation of the sample data: 3, 5, 7, with a sample mean of 5. Using the definition of the standard deviation:

$$s = \sqrt{\frac{(3-5)^2+(5-5)^2+(7-5)^2}{3-1}} = \sqrt{4} = 2$$

Commonly, we use the sample mean \(\bar x\) to estimate the population mean \(\mu\). For example, if we want to estimate the heights of eighty-year-old cherry trees, we can proceed as follows: randomly select 100 trees, compute the sample mean of the 100 heights, and use that as our estimate. We want to use this sample mean to estimate the true but unknown population mean. But our sample of 100 trees is just one of many possible samples (of the same size) that could have been randomly selected. Imagine if we take a series of different random samples from the same population, all of the same size: for Sample 1 we compute its sample mean \(\bar x\), and so on for each sample. Each time we sample, we may get a different result, as we are using a different subset of data to compute the sample mean. This shows us that the sample mean is a random variable!

The sample mean (\(\bar x\)) is a random variable with its own probability distribution, called the sampling distribution of the sample mean. The distribution of the sample mean will have a mean equal to \(\mu\) and a standard deviation equal to \(\frac{s}{\sqrt{n}}\). The standard error \(\frac{s}{\sqrt{n}}\) is the standard deviation of all possible sample means.
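As a quick check of Example 6 and the standard deviation example, here is a minimal sketch (not part of the original text) that computes the sample variance and standard deviation with the n – 1 denominator, both via Python's standard library and directly from the definitions.

```python
import statistics

data = [3, 5, 7]

# Library versions: statistics.variance and statistics.stdev use the n - 1 denominator
print(statistics.variance(data))  # 4
print(statistics.stdev(data))     # 2.0

# The same quantities written out from the definitions
xbar = sum(data) / len(data)                               # sample mean = 5.0
s2 = sum((x - xbar) ** 2 for x in data) / (len(data) - 1)  # sample variance = 4.0
s = s2 ** 0.5                                              # sample standard deviation = 2.0
print(xbar, s2, s)
```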
In reality, we would only take one sample, but we need to understand and quantify the sample-to-sample variability that occurs in the sampling process. The standard error is the standard deviation of the sample means and can be expressed in different ways:

$$s_{\bar x}=\sqrt{\frac{s^2}{n}}=\frac{s}{\sqrt{n}}$$

where \(s^2\) is the sample variance and \(s\) is the sample standard deviation.

Describe the distribution of the sample mean. A population of fish has weights that are normally distributed with \(\mu\) = 8 lb. and \(\sigma\) = 2.6 lb. If you take a sample of size n = 6, the sample mean will have a normal distribution with a mean of 8 and a standard deviation (standard error) of \(\frac{2.6}{\sqrt{6}}\) = 1.061 lb. If you increase the sample size to 10, the sample mean will be normally distributed with a mean of 8 lb. and a standard deviation (standard error) of \(\frac{2.6}{\sqrt{10}}\) = 0.822 lb. Notice how the standard error decreases as the sample size increases.

The Central Limit Theorem (CLT) states that the sampling distribution of the sample means will approach a normal distribution as the sample size increases. If we do not have a normal distribution, or know nothing about the distribution of our random variable, the CLT tells us that the distribution of the \(\bar x\)'s will become normal as n increases. How large does n have to be? A general rule of thumb tells us that n ≥ 30. The Central Limit Theorem tells us that regardless of the shape of our population, the sampling distribution of the sample mean will be approximately normal as the sample size increases.

Comparing standard deviations between different populations or samples is difficult because the standard deviation depends on units of measure. The coefficient of variation expresses the standard deviation as a percentage of the sample or population mean. It is a unitless measure.

Definition: CV of population

$$CV=\frac{\sigma}{\mu} \times 100$$

Definition: CV of sample

$$CV=\frac{s}{\bar x} \times 100$$

Fisheries biologists were studying the length and weight of Pacific salmon. They took a random sample and computed the mean and standard deviation for length and weight (given below). While the standard deviations are similar, the differences in units between lengths and weights make it difficult to compare the variability. Computing the coefficient of variation for each variable allows the biologists to determine which variable has the greater relative variability.

Length: sample mean 63 cm, sample standard deviation 19.97 cm.
Weight: sample mean 37.6 kg, sample standard deviation 19.39 kg.

There is greater variability in Pacific salmon weight compared to length.

Variability is described in many different ways. Standard deviation measures point-to-point variability within a sample, i.e., variation among individual sampling units. The coefficient of variation also measures point-to-point variability, but on a relative basis (relative to the mean), and is not influenced by measurement units. Standard error measures the sample-to-sample variability, i.e., variation among repeated samples in the sampling process. Typically, we only have one sample, and the standard error allows us to quantify the uncertainty in our sampling process.

Consider the following tally from 11 sample plots on Heiburg Forest, where \(X_i\) is the number of downed logs per acre. Compute basic statistics for the sample plots.

Table 1. Sample data on number of downed logs per acre from Heiburg Forest.
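The standard errors and coefficients of variation quoted above are easy to reproduce, and the sampling-distribution idea can be illustrated by simulation. The snippet below is a rough sketch (not part of the original text); the simulation simply draws repeated samples from a normal population with mean 8 and standard deviation 2.6 to show that the spread of the resulting sample means is close to \(\sigma/\sqrt{n}\).

```python
import math
import random

# Standard error of the mean for the fish-weight example (sigma = 2.6 lb.)
sigma = 2.6
for n in (6, 10):
    print(n, round(sigma / math.sqrt(n), 3))   # 1.061 and 0.822

# Coefficient of variation for the Pacific salmon example
print(round(19.97 / 63.0 * 100, 1))    # length: about 31.7%
print(round(19.39 / 37.6 * 100, 1))    # weight: about 51.6%

# Sampling distribution of the sample mean: the standard deviation of many
# sample means should be close to sigma / sqrt(n)
random.seed(1)
n, reps = 10, 5000
means = [sum(random.gauss(8.0, sigma) for _ in range(n)) / n for _ in range(reps)]
grand_mean = sum(means) / reps
sd_of_means = math.sqrt(sum((m - grand_mean) ** 2 for m in means) / (reps - 1))
print(round(sd_of_means, 3))           # close to 2.6 / sqrt(10) = 0.822
```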
The basic statistics computed for these data are the sample mean, the median (= 35), the variance, the standard deviation, the range (55 – 5 = 50), the coefficient of variation, and the standard error of the mean.

Open Minitab and enter the data in the spreadsheet. Select STAT > Descriptive Statistics and check all statistics required. Minitab returns a descriptive statistics table for the data, including N, SE Mean, CoefVar, and IQR.

Open up Excel and enter the data in the first column of the spreadsheet. Select DATA > Data Analysis > Descriptive Statistics. For the Input Range, select the data in column A. Check "Labels in First Row" and "Summary Statistics". Also check "Output Range" and select a location for the output. The summary output includes statistics such as the sample variance.

Data organization and summarization can be done graphically, as well as numerically. Tables and graphs allow for a quick overview of the information collected and support the presentation of the data used in the project. While there are a multitude of available graphics, this chapter will focus on a specific few commonly used tools.

Pie charts are a good visual tool allowing the reader to quickly see the relationship between categories. It is important to clearly label each category, and adding the frequency or relative frequency is often helpful. However, too many categories can be confusing. Be careful of putting too much information in a pie chart. The first pie chart gives a clear idea of the representation of fish types relative to the whole sample. The second pie chart is more difficult to interpret, with too many categories. It is important to select the best graphic when presenting the information to the reader.

Figure 4. Comparison of pie charts.

Bar charts graphically describe the distribution of a qualitative variable (fish type), while histograms describe the distribution of a quantitative variable, discrete or continuous (bear weight).

Figure 5. Comparison of a bar chart for qualitative data and a histogram for quantitative data.

In both cases, the bars have equal width and the y-axis is clearly defined. With qualitative data, each category is represented by a specific bar. With continuous data, lower and upper class limits must be defined with equal class widths. There should be no gaps between classes and each observation should fall into one, and only one, class.

Boxplots use the 5-number summary (minimum and maximum values with the three quartiles) to illustrate the center, spread, and distribution of your data. When paired with histograms, they give an excellent description, both numerically and graphically, of the data.

With symmetric data, the distribution is bell-shaped and somewhat symmetric. In the boxplot, we see that Q1 and Q3 are approximately equidistant from the median, as are the minimum and maximum values. Also, both whiskers (lines extending from the boxes) are approximately equal in length.

Figure 6. A histogram and boxplot of a normal distribution.

With skewed left distributions, we see that the histogram looks "pulled" to the left. In the boxplot, Q1 is farther away from the median, as is the minimum value, and the left whisker is longer than the right whisker.

Figure 7. A histogram and boxplot of a skewed left distribution.

With skewed right distributions, we see that the histogram looks "pulled" to the right. In the boxplot, Q3 is farther away from the median, as is the maximum value, and the right whisker is longer than the left whisker.

Figure 8. A histogram and boxplot of a skewed right distribution.
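For readers working outside Minitab or Excel, the same kind of numerical summary and the histogram/boxplot pairing shown in Figures 6-8 can be produced in Python. The sketch below is illustrative only (not part of the original text): it uses a made-up right-skewed sample rather than the Heiburg Forest data, since the individual plot values of Table 1 are not reproduced here, and it assumes NumPy and Matplotlib are available.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in right-skewed sample (hypothetical data, not the Heiburg Forest plots)
rng = np.random.default_rng(42)
logs = rng.gamma(shape=2.0, scale=15.0, size=200)

# Numerical summary comparable to the Minitab/Excel descriptive statistics output
mean = logs.mean()
sd = logs.std(ddof=1)                      # sample standard deviation (n - 1)
print("mean   :", round(mean, 2))
print("median :", round(float(np.median(logs)), 2))
print("stdev  :", round(sd, 2))
print("SE mean:", round(sd / np.sqrt(logs.size), 2))
print("CV (%) :", round(sd / mean * 100, 1))
print("range  :", round(logs.max() - logs.min(), 2))

# Paired histogram and boxplot, as in Figures 6-8
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(6, 6))
ax1.hist(logs, bins=15, edgecolor="black")
ax1.set_title("Histogram of a right-skewed sample")
ax2.boxplot(logs, vert=False)
ax2.set_title("Boxplot of the same sample")
plt.tight_layout()
plt.show()
```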
This page titled 1.1: Descriptive Statistics is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Diane Kiernan (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform. Source: https://milneopentextbooks.org/natural-resources-biometrics
Glucocorticoid receptor signaling in leukocytes after early life adversity
Martha M. C. Elwenspoek, Xenia Hengesch, Fleur A. D. Leenen, Krystel Sias, Sara Beatriz Fernandes, Violetta K. Schaan, Sophie B. Mériaux, Stephanie Schmitz, Fanny Bonnemberger, Hartmut Schächinger, Claus Vögele, Claude P. Muller, Jonathan D. Turner
Journal: Development and Psychopathology, First View. Published online by Cambridge University Press: 13 August 2019, pp. 1-11
Early life adversity (ELA) has been associated with inflammation and immunosenescence, as well as hyporeactivity of the HPA axis. Because the immune system and the HPA axis are tightly intertwined around the glucocorticoid receptor (GR), we examined peripheral GR functionality in the EpiPath cohort among participants who either had been exposed to ELA (separation from parents and/or institutionalization followed by adoption; n = 40) or had been reared by their biological parents (n = 72). Expression of the strict GR target genes FKBP5 and GILZ as well as total and 1F and 1H GR transcripts were similar between groups. Furthermore, there were no differences in GR sensitivity, examined by the effects of dexamethasone on IL6 production in LPS-stimulated whole blood. Although we did not find differences in methylation at the GR 1F exon or promoter region, we identified a region of the GR 1H promoter (CpG 1-9) that showed lower methylation levels in ELA. Our results suggest that peripheral GR signaling was unperturbed in our cohort and the observed immune phenotype does not appear to be secondary to an altered GR response to the perturbed HPA axis and glucocorticoid (GC) profile, although we are limited in our measures of GR activity and time points.

Using Micro-Beam Techniques to Infer Meteorite Abundances of the Jurassic
C. E. Caplan, G. R. Huss, H. A. Ishii, J. P. Bradley, P. Eschbach, B. Schmitz, K. Nagashima
Journal: Microscopy and Microanalysis / Volume 24 / Issue S1 / August 2018. Published online by Cambridge University Press: 01 August 2018, pp. 714-715. Print publication: August 2018

The Wisconsin Plasma Astrophysics Laboratory Experiments
Fundamental Plasma Physics
C. B. Forest, K. Flanagan, M. Brookhart, M. Clark, C. M. Cooper, V. Désangles, J. Egedal, D. Endrizzi, I. V. Khalzov, H. Li, M. Miesch, J. Milhone, M. Nornberg, J. Olson, E. Peterson, F. Roesler, A. Schekochihin, O. Schmitz, R. Siller, A. Spitkovsky, A. Stemo, J. Wallace, D. Weisberg, E. Zweibel
Journal: Journal of Plasma Physics / Volume 81 / Issue 5 / October 2015. Published online by Cambridge University Press: 07 August 2015, 345810501
The Wisconsin Plasma Astrophysics Laboratory (WiPAL) is a flexible user facility designed to study a range of astrophysically relevant plasma processes as well as novel geometries that mimic astrophysical systems.
A multi-cusp magnetic bucket constructed from strong samarium cobalt permanent magnets now confines a $10~\text{m}^{3}$ , fully ionized, magnetic-field-free plasma in a spherical geometry. Plasma parameters of $T_{e}\approx 5$ to $20~\text{eV}$ and $n_{e}\approx 10^{11}$ to $5\times 10^{12}~\text{cm}^{-3}$ provide an ideal testbed for a range of astrophysical experiments, including self-exciting dynamos, collisionless magnetic reconnection, jet stability, stellar winds and more. This article describes the capabilities of WiPAL, along with several experiments, in both operating and planning stages, that illustrate the range of possibilities for future users. Positive and Negative Externalities in Agricultural Production: The Case of Adena Springs Ranch Charles B. Moss, Andrew Schmitz Journal: Journal of Agricultural and Applied Economics / Volume 45 / Issue 3 / August 2013 Published online by Cambridge University Press: 26 January 2015, pp. 401-409 View extract Policy analysis is complicated by the myriad of benefits and costs generated by the use of natural resources. This study develops three benefits that must be considered in the granting of a consumptive use permit for water filed by Adena Springs Ranch, east of Ocala, Florida. This ranch is hoping to expand into grass-fat beef; but to do so, it needs additional water for irrigation. Specifically, our analysis considers the potential gain from the ranch, the potential negative effect on existing permit holders and environmental uses of water, and the possible positive value generated by the increased surface flow for other recreational users in eastern Marion County. By Fiona B. Adamson, Kristin M. Bakke, Andrew Bennett, Jeffrey T. Checkel, Stephan Hamberg, Kristian Berg Harpviken, Sarah Kenyon Lischer, Martin Austvoll Nome, Hans Peter Schmitz, Nils B. Weidmann, Elisabeth Jean Wood Edited by Jeffrey T. Checkel, Simon Fraser University, British Columbia Book: Transnational Dynamics of Civil War Published online: 05 February 2013 Print publication: 24 January 2013, pp xi-xii By Mary T. Antonelli, Maria A. Antor, Alfredo R. Arribas, Ron Banister, Donna Beitler, Ellen K. Bergeron, Sergio D. Bergese, Louise Caperelli-White, Corey E. Collins, Karen B. Domino, Charles Fox, Mary Elise Fox, Julie Gayle, Kristi Dorn Hare, Eugenie S. Heitmiller, Bommy Hong, Joseph C. Hung, Philip Kalarickal, Adam M. Kaye, Alan D. Kaye, Jeffrey S. Kelly, Eunhea Kim, Lyubov Kozmenko, Valeriy Kozmenko, Laura Kress, Martin Kubin, Usman Latif, Henry Liu, Todd Liu, Joyce C. Lo, Kai Matthes, Julia Metzner, Rahul Mishra, Debra E. Morrison, Arnab Mukherjee, Heikki E. Nikkanen, Erika G. Puente, Benjamin R. Record, James Riopelle, Brenda Schmitz, David E. Seaver, Patricia M. Sequeira, Theodore Strickland, Heather Trafton, J. Gabriel Tsang, Alberto Uribe, Richard D. Urman, Ghousia Wajida, Emmett Whitaker, Jamie Wingate, Michael Yarborough Edited by Richard D. Urman, Alan D. Kaye Book: Moderate and Deep Sedation in Clinical Practice Print publication: 09 February 2012, pp ix-xi Predictors of 1-year outcomes of major depressive disorder among individuals with a lifetime diagnosis: a population-based study J. L. Wang, S. B. Patten, S. Currie, J. Sareen, N. Schmitz Journal: Psychological Medicine / Volume 42 / Issue 2 / February 2012 Examining predictors of the outcomes of major depressive disorder (MDD) is important for clinical practice and population health. There are few population-based longitudinal studies on this topic. 
The objectives of this study were to (1) estimate the proportions of persistent and recurrent MDD among those with MDD over 1 year, and (2) identify demographic, socio-economic, workplace psychosocial and clinical factors associated with the outcomes. From a population-based longitudinal study of the working population, participants with a lifetime diagnosis of MDD were selected (n=834). They were classified into two groups: those with and those without current MDD. The proportions of 1-year persistence and recurrence of MDD were estimated. MDD was assessed by the World Health Organization (WHO) Composite International Diagnostic Interview, CIDI-Auto 2.1, by telephone. The proportions of persistent and recurrent MDD in 1 year were 38.5% [95% confidence interval (CI) 31.1–46.5] and 13.3% (95% CI 10.2–17.1) respectively. Long working hours, negative thinking and having co-morbid social phobia were predictive of persistence of MDD. Perceived work–family conflict, the severity of a major depressive episode and symptoms of depressed mood were significantly associated with the recurrence of MDD. Clinical and psychosocial factors are important in the prognosis of MDD. The factors associated with persistence and recurrence of MDD may be different. More large longitudinal studies on this topic are needed so that clinicians may predict potential outcomes based on the clinical profile and provide interventions accordingly. They may also take clinical action to change relevant psychosocial factors to minimize the chance of persistence and/or recurrence of MDD. By Albert P. Aldenkamp, Fizzah Ali BMedSc, Frank M.C. Besag, Penny Blake, Sarah Broicher, Andrea Eugenio Cavanna, Thierry Deonna, Marie-Aline Eden, Alan B. Ettinger, Christoph Helmstaedter, Dale C. Hesdorffer, Hennric Jokeit, Kousuke Kanemoto, Andres M. Kanner, Mike Kerr, Steffi Koch-Stoecker, Ennapadam S. Krishnamoorthy, W. Curt LaFrance, Marco Mula, Jane V. Perr, Bernd Pohlmann-Eden, Eliane Roulet-Perez, Bettina Schmitz, Tanvir Syed, Michael R. Trimble, Juri-Alexander Witt Edited by Michael R. Trimble, Institute of Neurology, London, Bettina Schmitz Book: The Neuropsychiatry of Epilepsy Print publication: 09 June 2011, pp vi-viii Monitoring the growth of microcrystalline silicon deposited by plasma-enhanced chemical vapor deposition using in-situ Raman spectroscopy S. Muthmann, F. Köhler, M. Hülsbeck, M. Meier, A. Mück, R. Schmitz, W. Appenzeller, R. Carius, A. Gordijn Journal: MRS Online Proceedings Library Archive / Volume 1321 / 2011 Published online by Cambridge University Press: 16 August 2011, mrss11-1321-a02-03 A novel setup for Raman measurements under small angles of incidence during the parallel plate plasma enhanced chemical vapor deposition of μc-Si:H films is described. The possible influence of disturbances introduced by the setup on growing films is studied. The substrate heating by the probe beam is investigated and reduced as far as possible. It is shown that with optimized experimental parameters the influence of the in-situ measurements on a growing film can be neglected. With optimized settings, in-situ Raman measurements on the intrinsic layer of a microcrystalline silicon solar cell are carried out with a time resolution of about 40 s corresponding to 20 nm of deposited material during each measurement. Characterization of Green Laser Crystallized GeSi Thin Films Balaji Rangarajan, Ihor Brunets, Peter Oesterlin, Alexey Y. 
Kovalgin, Jurriaan Schmitz Published online by Cambridge University Press: 20 June 2011, mrss11-1321-a06-04 Green laser crystallization of a-Ge0.85Si0.15 films deposited using Low Pressure Chemical Vapour Deposition is studied. Large grains of 8x2 μm2 size were formed using a location-controlled approach. Characterization is done using Scanning Electron Microscopy, Atomic Force Microscopy, X-Ray Photoelectron Spectroscopy and X-Ray Diffraction. TEM studies of the ternary Ti36 Al62 Nb2 alloy V.S.K. Chakravadhanula, K. Kelm, L. Kienle, V. Duppel, A. Lotnyk, D. Sturm, M. Heilmaier, G. J. Schmitz, A. Drevermann, F. Stein, M. Palm Published online by Cambridge University Press: 04 February 2011, mrsf10-1295-n03-09 Al-rich Ti-Al alloys attracted some attention during the past years due to the possibility of their application as light-weight, high-performance materials at elevated temperatures. The effect of the addition of Nb to Al-rich Ti-Al alloys has been studied for Ti36 Al62 Nb2 by a combined approach of transmission electron microscopy (TEM) techniques for unraveling the structure and composition at the nanoscale. Structural analyses on as-cast ternary alloys revealed the presence of h-TiAl2-, Ti3Al5- and γ-TiAl-type phases. After heat treatment, phase transformations like the replacement of the metastable h-TiAl2-type by the stable r-TiAl2-type were identified. Additionally, changes of the microstructural features like the formation of interfaces with different orientation relationships are apparent. The orientation and interfacial relationships involved are compared to those of binary Ti-Al alloys rich in Al. The Effect of Increased Energy Prices on Agriculture: A Differential Supply Approach Charles B. Moss, Grigorios Livanis, Andrew Schmitz Journal: Journal of Agricultural and Applied Economics / Volume 42 / Issue 4 / November 2010 Print publication: November 2010 The increase in energy prices between 2004 and 2007 has several potential consequences for aggregate agriculture in the U.S. We estimate the derived input demand elasticities for energy as well as capital, labor, and materials using the differential supply formulation. Given that the derived input demand for energy is inelastic, it is more price-responsive than the other inputs. The results also indicate that the U.S. aggregate agricultural supply function is responsive to energy prices. White-matter markers for psychosis in a prospective ultra-high-risk cohort O. J. N. Bloemen, M. B. de Koning, N. Schmitz, D. H. Nieman, H. E. Becker, L. de Haan, P. Dingemans, D. H. Linszen, T. A. M. J. van Amelsvoort Journal: Psychological Medicine / Volume 40 / Issue 8 / August 2010 Published online by Cambridge University Press: 09 November 2009, pp. 1297-1304 Subjects at 'ultra high risk' (UHR) for developing psychosis have differences in white matter (WM) compared with healthy controls. WM integrity has not yet been investigated in UHR subjects in relation to the development of subsequent psychosis. Hence, we investigated a prospective cohort of UHR subjects comparing whole brain fractional anisotropy (FA) of those later developing psychosis (UHR-P) to those who did not (UHR-NP). We recruited 37 subjects fulfilling UHR criteria and 10 healthy controls. Baseline 3 Tesla magnetic resonance imaging (MRI) scans and Positive and Negative Syndrome Scale (PANSS) ratings were obtained. UHR subjects were assessed at 9, 18 and 24 months for development of frank psychosis. We compared baseline FA of UHR-P to controls and UHR-NP subjects. 
Furthermore, we related clinical data to MRI outcome in the patient population. Of the 37 UHR subjects, 10 had transition to psychosis. UHR-P subjects showed significantly lower FA values than control subjects in medial frontal lobes bilaterally. UHR-P subjects had lower FA values than UHR-NP subjects, lateral to the right putamen and in the left superior temporal lobe. UHR-P subjects showed higher FA values, compared with UHR-NP, in the left medial temporal lobe. In UHR-P, positive PANSS negatively correlated to FA in the left middle temporal lobe. In the total UHR group positive PANSS negatively correlated to FA in the right superior temporal lobe. UHR subjects who later develop psychosis have differences in WM integrity, compared with UHR subjects who do not develop psychosis and to healthy controls, in brain areas associated with schizophrenia. F-54 Stardust Cometary Matter Analyzed By Synchrotron Nano-XRF: New Results and Developments T. Schoonjans, B. Vekemans, G. Silversmit, L. Vincze, S. Schmitz, F. Brenker Journal: Powder Diffraction / Volume 24 / Issue 2 / June 2009 Published online by Cambridge University Press: 20 May 2016, p. 171 F-61 Polycapillary Based Confocal Detection Schemes for XRF Micro and Nano-Spectroscopy B. Vekemans, B. De Samber, T. Schoonjans, G. Silversmit, L. Vincze, R. Evens, K. De Schamphelaere, C. R. Janssen, B. Masschaele, L. Van Hoorebeeke, S. Schmitz, F. Brenker, R. Tucoulou, P. Cloetens, M. Burghammer, J. Susini, C. Riekel The NStED Stellar and Exoplanet Hosting Star Service S. Ramirez, B. Ali, R. Baker, G. B. Berriman, K. von Braun, N-M. Chiu, D. R. Ciardi, J. Good, S. R. Kane, A. C. Laity, D. L. McElroy, S. Monkewitz, A. N. Payne, M. Schmitz, J. R. Stauffer, P. L. Wyatt, A. Zhang Journal: Proceedings of the International Astronomical Union / Volume 4 / Issue S253 / May 2008 Print publication: May 2008 The NASA Star and Exoplanet Database (NStED) is a general purpose stellar archive with the aim of providing support for NASA's planet finding and characterization goals, stellar astrophysics, and the planning of NASA and other space missions. There are two principal components of NStED: a database of (currently) 140,000 nearby stars and exoplanet-hosting stars, and an archive dedicated to high precision photometric surveys for transiting exoplanets. We present a summary of the NStED stellar database, functionality, tools, and user interface. NStED currently serves the following kinds of data for 140,000 stars (where available): coordinates, multiplicity, proper motion, parallax, spectral type, multiband photometry, radial velocity, metallicity, chromospheric and coronal activity index, and rotation velocity/period. Furthermore, the following derived quantities are given wherever possible: distance, effective temperature, mass, radius, luminosity, space motions, and physical/angular dimensions of habitable zone. Queries to NStED can be made using constraints on any combination of the above parameters. In addition, NStED provides tools to derive specific inferred quantities for the stars in the database, cross-referenced with available extra-solar planetary data for those host stars. NStED can be accessed at http://nsted.ipac.caltech.edu. The NStED Exoplanet Transit Survey Service K. von Braun, M. Abajian, B. Ali, R. Baker, G. B. Berriman, N-M. Chiu, D. R. Ciardi, J. Good, S. R. Kane, A. C. Laity, D. L. McElroy, S. Monkewitz, A. N. Payne, S. Ramirez, M. Schmitz, J. R. Stauffer, P. L. Wyatt, A. 
Zhang The NASA Star and Exoplanet Database (NStED) is a general purpose stellar archive with the aim of providing support for NASA's planet finding and characterization goals, stellar astrophysics, and the planning of NASA and other space missions. There are two principal components of NStED: a database of (currently) 140,000 nearby stars and exoplanet-hosting stars, and an archive dedicated to high-precision photometric surveys for transiting exoplanets. We present a summary of the latter component: the NStED Exoplanet Transit Survey Service (NStED-ETSS), along with its content, functionality, tools, and user interface. NStED-ETSS currently serves data from the TrES Survey of the Kepler Field as well as dedicated photometric surveys of four stellar clusters. NStED-ETSS aims to serve both the surveys and the broader astronomical community by archiving these data and making them available in a homogeneous format. Examples of usability of ETSS include investigation of any time-variable phenomena in data sets not studied by the original survey team, application of different techniques or algorithms for planet transit detections, combination of data from different surveys for given objects, statistical studies, etc. NStED-ETSS can be accessed at http://nsted.ipac.caltech.edu. COMMISSION 5: DOCUMENTATION AND ASTRONOMICAL DATA Françoise Genova, Raymond P. Norris, Olga B. Dluzhnevskaya, Michael S. Bessel, H. Jenker, Oleg Yu. Malkov, Fionn Murtagh, Koichi Nakajima, François Ochsenbein, William D. Pence, Marion Schmitz, Roland Wielen, Yong Heng Zhao Journal: Proceedings of the International Astronomical Union / Volume 3 / Issue T26B / December 2007 Published online by Cambridge University Press: 18 November 2008, pp. 212-216 Print publication: December 2007 Commission 5 has been very active during the IAU XXVI General Assembly in Prague: the Commission, its Working Groups and its Task Force held business meetings. In addition, Commission 5 sponsored two Special Sessions: Special Session 3 on The Virtual Observatory in Action: New Science, New Technology, and Next Generation Facilities which was held for three days 17–22 August, and Special Session 6 on Astronomical Data Management, which was held on 22 August. Commission 5 also participated in the organisation of Joint Discussion 16 on Nomenclature, Precession and New Models in Fundamental Astronomy, which was held 22-23 August. The General Assembly and Commission 5 web sites provides links to detailed information about all these meetings. Hidden parameters in the plasma deposition of microcrystalline silicon solar cells M.N. van den Donker, B. Rech, R. Schmitz, J. Klomfass, G. Dingemans, F. Finger, L. Houben, W.M.M. Kessels, M.C.M. van de Sanden Journal: Journal of Materials Research / Volume 22 / Issue 7 / July 2007 Published online by Cambridge University Press: 31 January 2011, pp. 1767-1774 The effect of process parameters on the plasma deposition of μc-Si:H solar cells is reviewed in this article. Several in situ diagnostics are presented, which can be used to study the process stability as an additional parameter in the deposition process. The diagnostics were used to investigate the stability of the substrate temperature during deposition at elevated power and the gas composition during deposition at decreased hydrogen dilution. Based on these investigations, an updated view on the role of the process parameters of plasma power, heater temperature, total gas flow rate, and hydrogen dilution is presented. 
Post-resuscitation haemodynamics in a novel acute myocardial infarction cardiac arrest model in the pig T. Palmaers, S. Albrecht, C. Leuthold, F. Heuser, J. Schuettler, B. Schmitz Journal: European Journal of Anaesthesiology / Volume 24 / Issue 7 / July 2007 Although a considerable amount of promising experimental research has been performed on cardiopulmonary resuscitation, clinical data indicate an ongoing limited outcome in human beings. One reason for this discrepancy could be that experimental studies use healthy animals whereas most human beings undergoing cardiopulmonary resuscitation suffer from acute or chronic myocardial dysfunction. To overcome this problem, we sought to develop a new model of myocardial infarction, that is easy to perform in all kind of laboratories and compromises on the myocardial function significantly. Following approval by the local authorities, 14 domestic pigs were instrumented for measurement of arterial, central venous, left atrial and left ventricular pressures. Myocardial infarction was induced in eight pigs by clipping the circumflex artery close to its origin from the left coronary artery (infarction group; n = 8). Six animals (no infarction group, n = 6) served as no-infarct controls. Following a 4-min period of cardiac arrest, internal cardiac massage was performed in these two groups, and haemodynamics were recorded during the first 30 min of reperfusion. All animals were resuscitated successfully. Compared to the no-infarction group, the infarction group showed significantly decreased myocardial contractility, coronary perfusion pressure and cardiac index (30 min after restoration of spontaneous circulation: infarction group: 57 ± 7 and 89 ± 19 mL min−1 kg−1 in the no-infarction group; mean ± SD; P < 0.05) during reperfusion. Two animals from the infarction group (25%), but none of the animals in the no-infarction group, died during the reperfusion period. These data demonstrate that clipping of the circumflex artery leads to a reduced myocardial performance after successful resuscitation, whereas the rate of restoration of spontaneous circulation is not reduced. Therefore, this set-up provides a reproducible model for future studies of post-resuscitation haemodynamics and treatment.
The impact of regular school closure on seasonal influenza epidemics: a data-driven spatial transmission model for Belgium Giancarlo De Luca1, Kim Van Kerckhove2, Pietro Coletti2, Chiara Poletto1, Nathalie Bossuyt3, Niel Hens2,4 & Vittoria Colizza ORCID: orcid.org/0000-0002-2113-23741,5 BMC Infectious Diseases volume 18, Article number: 29 (2018) Cite this article School closure is often considered as an option to mitigate influenza epidemics because of its potential to reduce transmission in children and then in the community. The policy is still however highly debated because of controversial evidence. Moreover, the specific mechanisms leading to mitigation are not clearly identified. We introduced a stochastic spatial age-specific metapopulation model to assess the role of holiday-associated behavioral changes and how they affect seasonal influenza dynamics. The model is applied to Belgium, parameterized with country-specific data on social mixing and travel, and calibrated to the 2008/2009 influenza season. It includes behavioral changes occurring during weekend vs. weekday, and holiday vs. school-term. Several experimental scenarios are explored to identify the relevant social and behavioral mechanisms. Stochastic numerical simulations show that holidays considerably delay the peak of the season and mitigate its impact. Changes in mixing patterns are responsible for the observed effects, whereas changes in travel behavior do not alter the epidemic. Weekends are important in slowing down the season by periodically dampening transmission. Christmas holidays have the largest impact on the epidemic, however later school breaks may help in reducing the epidemic size, stressing the importance of considering the full calendar. An extension of the Christmas holiday of 1 week may further mitigate the epidemic. Changes in the way individuals establish contacts during holidays are the key ingredient explaining the mitigating effect of regular school closure. Our findings highlight the need to quantify these changes in different demographic and epidemic contexts in order to provide accurate and reliable evaluations of closure effectiveness. They also suggest strategic policies in the distribution of holiday periods to minimize the epidemic impact. Children represent an epidemiological group of central importance for the transmission of influenza [1–3]. They often have a larger vulnerability to infections because of limited prior immunity, and they mix at school with high contact rates [4] thus representing key drivers for influenza spread. The closure of school has been associated to the potential of reducing influenza propagation in the community by breaking important chains of transmission. It is expected to potentially delay the peak, and reduce the epidemic impact, at peak time and of the overall wave. Though not specifically recommended by the World Health Organization during the 2009 H1N1 pandemic, it is envisioned as a possible non-pharmaceutical intervention for pandemic mitigation left to the decision of national and local authorities [5, 6]. A large body of literature exists on the topic, however contrasting evidence lead to no definitive emerging consensus [7, 8]. Benefits and limitations appear to depend on the specific epidemic context. For example, influenza epidemics characterized by a larger attack rate in children compared to adults are expected to be more sensitive to the closure of schools [9]. 
This experience was reported in many countries during the 2009 H1N1 influenza pandemic, where closed schools coincided with a marked reduction of influenza activity [10–22]. School closure interventions are often considered along with other mitigation strategies, as it happened with social distancing in Mexico following the pandemic outbreak [15, 23], making their effect more difficult to isolate. Studies generally report a slowing down effect in the incidence during closure, however in some cases the effect may be mixed with the natural decline of the epidemic, because of late implementation [8]. In addition, no clear trend is observed in the impact of school closure on the epidemic burden depending on the time of closure – before, around, or after the peak [8]. Given its potentially important role in reducing the epidemic impact, school closure has more often been investigated in the realm of pandemics compared to seasonal influenza [8]. For the latter, regular school closure during holidays in temperate regions has been considered as a natural example to evaluate the impact of school closure [24–26]. Directly extending the application to pandemic situations has however important limitations. Closure associated to winter holidays is regularly scheduled in the school calendar, whereas school closure as an intervention corresponds to an unplanned interruption of school attendance that may take different forms (e.g. proactive vs. reactive, at the national or local level, with a gradual closure of classes or of the entire school) [7, 27–29]. In addition to the different nature of closure, also its duration changes from a fixed scheduled period of holidays to one of variable extension depending on the ongoing epidemic and resulting outcome. How individual behavior changes in all these conditions is the critical aspect to quantify in order to accurately assess the impact of school closure on transmission. Transmission models fitting the 2009 H1N1 pandemic or parameterized to a similar pandemic scenario have been used to assess the value of school closure or summer holidays [17, 20, 28–30]. Few of them are based on estimates for social mixing changes [20, 30], as data collected during a pandemic are limited [30, 31], leaving other approaches to rely on assumptions about contacts that may critically affect the studies' findings. Applications to seasonal influenza may on the other hand count on a more accurate description of population mixing. Surveys conducted over the calendar year to measure variations of mixing patterns [32–35] offer indeed the opportunity to perform data-driven modeling studies that mechanistically assess the role of school holidays on seasonal influenza. Interestingly, they also highlighted considerably large differences across countries in the way contacts change from term-time to school holidays [32], suggesting the need for country-specific estimates to accurately and reliably parameterize models. Changes in mobility is another important aspect that is rarely integrated in school closure studies. Travel is known to be responsible for the spatial dissemination of influenza [20, 36–44]. In addition to extraordinary travel drops in reaction to epidemics [44–46], mobility changes regularly occur during school holidays compared to term-time [47, 48]. Moreover, important differences were highlighted in the mobility of children vs. 
adults and their associated variations, so that their coupling with social mixing changes occurring during holidays may have a considerable impact on the epidemic outcome [20, 48]. Our aim is to explicitly integrate social mixing and travel from data into a modeling framework to assess how variations induced by regular school closure may impact seasonal influenza epidemics. Three modeling studies were developed so far with similar objectives. Towers and Chowell studied the impact of day-of-week variations in human social contact patterns on incidence data collected at a large hospital in Santiago, Chile, during 2009 H1N1pdm [49]. Their approach was not spatial, therefore did not consider mobility changes, and mainly focused on the sensitivity of influenza incidence variations to the latency period. Apolloni et al. used a stylized analytical approach to evaluate the role of age-dependent social mixing and travel behavior on the conditions for epidemic spatial invasion [20]. The model can compare different contexts, with or without schools in terms, and also account for associated changes. The contexts are however considered independently (no full school calendar can be considered) and the epidemic impact is evaluated only in terms of conditions for spatial dissemination. Going beyond these limitations, more recently Ewing et al. introduced an age-specific spatial metapopulation model to evaluate how behavioral changes associated to winter holiday impact the flu season [48]. The model is applied to the United States and it integrates data on travel behavior, whereas mixing is assumed from estimates available from Europe and adapted to summer holiday changes measured in the UK during the 2009 pandemic [30]. Their findings identify changes in mixing patterns as the key element responsible for the epidemic effects induced by holidays. Given the central role of mixing patterns largely supported by evidence [7, 8, 20, 48], the heterogeneous country-specific contact variations measured in Europe [4, 32], and the marked difference expected in individual behavior during a seasonal flu epidemic vs. a pandemic, here we extend prior approaches to introduce a data-driven spatially explicit model fully parameterized on Belgium. The aim is to reduce assumptions in favor of input data, and to exclusively focus on seasonal influenza and associated parameterization. Contact data associated to four types of calendar days are considered, belonging to Regular Weekday, Regular Weekend, Holiday Weekday, Holiday Weekend (here 'regular' refers to non-holiday period), allowing us to assess the role of weekends in addition to holidays. A richer calendar with additional holidays beyond Christmas break is also considered. Confirming and extending prior results with a different modeling approach and input data would greatly support our understanding of the role of changes in mixing and travel on influenza epidemics. In order to study the role of changes in contact patterns and in travel behavior along the calendar, we considered a mathematical approach for the spatial transmission of influenza in Belgium. We built a discrete stochastic age-specific spatial metapopulation model at the municipality level, based on demographic, mixing, and mobility data of Belgium. We parameterized it with influenza-like-illness (ILI) data reported by the Belgian Scientific Institute of Public Health at the district level for the 2008/2009 season. 
By using the Belgian school calendar for that season, we assessed the impact of the individual changes in mixing and travel behavior during regular school closure, given by available data. We then tested experimental scenarios to identify the mechanisms responsible for the observed epidemic outcomes. Here, we describe in detail the mathematical model, input data, calibration procedure and experimental scenarios. Age-specific metapopulation model Metapopulation epidemic models are used to describe the spatial spread of an infectious disease through a spatially structured host population [50–53]. They are composed of patches or subpopulations of the system, connected through a coupling process generally describing hosts mobility. Here we consider the population to be divided into two age classes, children and adults, based on the modeling framework introduced by Apolloni et al. [20]. Infection dynamics occur inside each patch, driven by the contacts between and within these two classes, and spatial spread occurs via the mobility of individuals (Fig. 1). Both processes are modeled explicitly with a discrete and stochastic approach. The model is based on Belgian data and follows the time evolution of the 2008/2009 school calendar. It includes 589 patches corresponding to the 589 municipalities (nl. gemeenten, fr. communes) of Belgium. Weekends and school holidays are explicitly considered, and variations in mixing and travel behavior are accounted for in the model and based on data. In the following, we describe in detail the various components of the model. Schematic illustration of the spatial age-structured metapopulation model. The metapopulation modeling scheme is composed of three layers. At the country scale, Belgium is modeled as a set of patches (here indicated with q and p) corresponding to municipalities coupled through mobility of individuals f pq (i) of age class i at time t. Within each municipality, population is divided into two age classes, children (c) and adults (a), whose mixing pattern is defined by the contact matrix C. Individuals resident of patch p and individuals commuting to that patch (e.g. resident of patch q) mix together following commuting. The figure reports as an example the contact matrix of a regular weekday (Eq. (3)). Mobility and mixing vary based on the calendar day (regular/holiday, weekday/weekend). Influenza disease progression at the individual level is modeled through a Susceptible-Exposed-Infectious-Recovered compartmental scheme, with β indicating the per-contact transmission rate, ε the rate from exposed to infectious state, μ the recovery rate Demography and social mixing Individuals are divided into children (c, age less than 19y) and adults (a, otherwise). Population size and age structure per municipality as of January 1, 2008 are obtained from Belgian Statistics [54]. Social mixing between the two age groups is quantified by contact matrices extracted from the data obtained through a Belgian social contact survey [4, 32]: $$ \pmb{C}=\left (\begin{array}{cc} C_{cc} & C_{ca}\\ C_{ac} & C_{aa}\\ \end{array} \right) $$ where the element ij (i=a,c, j=a,c) is given by: $$ C_{ij}=\frac{M_{ij}}{N_{j}}N_{tot}\, {,} $$ with M ij the average number of contacts made by survey participants in age class i with individuals in age class j, N j the population of age class j, N tot the total population of Belgium. 
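As a concrete illustration of the contact-matrix definition just above, the short sketch below (not the authors' code) shows the population-weighting step that turns the survey averages \(M_{ij}\) into the matrix \(C\). The numbers used here are placeholders rather than the actual Belgian survey values; the real inputs yield the day-type matrices reported further below.

```python
import numpy as np

# Placeholder inputs (illustrative values, not the Belgian survey data):
# M[i, j] = average number of contacts reported by a participant of age class i
#           with individuals of age class j (index 0 = children, 1 = adults)
M = np.array([[8.0, 3.5],
              [1.2, 6.0]])
# Population sizes per age class and total population
N = np.array([2.4e6, 8.6e6])
N_tot = N.sum()

# C_ij = M_ij / N_j * N_tot  (the definition given above)
C = M / N[np.newaxis, :] * N_tot
print(C)
```

Note that the matrices actually used in the model (shown below) have equal off-diagonal entries, which is consistent with reciprocity of contacts between children and adults.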
C is defined at the national level, and here we assume that it is the same throughout the country, with the number of contacts being altered exclusively by the patch demography (see Additional file 1: Section 1 for more details). From survey data, we have that the contact matrix for a regular weekday is: $$ \pmb{C^{\text{reg weekday}}}= \left (\begin{array}{cc} C_{cc} & C_{ca}\\ C_{ac} & C_{aa}\\ \end{array} \right) = \left (\begin{array}{cc} 40.71 & 7.84\\ 7.84 & 14.25\\ \end{array} \right)\,. $$ The variations for the other day types are discussed in paragraph "Changes in social mixing and travel behavior during school closure". Infection dynamics Influenza disease progression is described through a Susceptible-Exposed-Infectious-Recovered (SEIR) model (Fig. 1) [55, 56]. A susceptible individual can contract the disease with a per-contact transmissibility rate β from infectious individuals, then entering the exposed or latent class. After an average latency period of ε−1=1.1 days [57, 58], the individual becomes infectious for an average duration μ−1=3 days [57, 58] and can transmit the infection, before recovering and becoming immune to the disease. A fraction of children (g c ) and adults (g a ) were considered immune to the disease at the beginning of the influenza epidemic, based on available knowledge on prior immunity and vaccination coverage in the country for the 2008/2009 influenza season and prior seasons (g c =39.87%, g a =53.19%) [59–61]. The force of infection for a susceptible individual of age class i (i=a,c) in a given patch p is given by: $$ \lambda(i,p,t)=\beta\sum_{j} C_{ij}(t) \frac{I^{p}_{j}(t)}{N^{p}(t)}, $$ where j runs on age classes, \(I^{p}_{j}(t)\) and Np(t) count the total number of infectious individuals of age class j and the total population size of patch p at time t, respectively. The force of infection changes with time for two reasons. The first is the school calendar, distinguishing between regular weekdays, regular weekends, holiday weekdays, holidays weekends, and accounted for by C ij (t). The second is the mobility of individuals. At a given day of the simulation, each patch p may include: non-commuting residents of p, commuters from neighboring patches for school/work, commuting residents of p after school/work (see paragraph "Mobility" and Additional file 1: Section 1). Coupling from patch p to patch q is given by the mobility of age class i, i.e. f pq (i) (Fig. 1). We considered commuting data across patches from the 2001 Socio Economic Survey of the Belgian Census [62] to describe the regular mobility of individuals for school/work during a regular weekday. Data are not age-specific, so we extracted the commuting fluxes per age class based on the probability of children (adults) of commuting on a given distance computed on the French commuting data [63] (see Additional file 1: Section 1). Such inference was based on the assumption of a similar mobility behavior across the two neighboring countries. Air travel was not considered due to negligible internal air traffic within the country. Changes in social mixing and travel behavior during school closure Changes in social mixing are based on the data of the Belgian social contact survey [4, 32], where participants were asked to report their number of contacts during a regular weekday, a regular weekend, a holiday weekday or a holiday weekend. In addition to the contact matrix of Eq. 
(3), we have:

$$ \pmb{C^{\text{reg weekend}}}= \left (\begin{array}{cc} 12.51 & 6.00\\ 6.00 & 10.85\\ \end{array} \right)\,\, \text{for a regular weekend}, $$

$$ \pmb{C^{\text{hol weekday}}}= \left (\begin{array}{cc} 14.02 & 7.28\\ 7.28 & 12.29\\ \end{array} \right)\,\, \text{for a holiday weekday}, $$

$$ \pmb{C^{\text{hol weekend}}}= \left (\begin{array}{cc} 10.89 & 7.20\\ 7.20 & 8.59\\ \end{array} \right)\,\, \text{for a holiday weekend}. $$

Concerning variations in mobility, schools are closed during weekends and holidays, so no commuting exists for children on those days. We considered adults to continue commuting during holiday weekdays, assuming that adults' time off work would be homogeneously distributed throughout the year. Concerning adult mobility during weekends, we estimated the travel flux reductions from statistics available for France [64], under the same assumptions explained in paragraph "Mobility". The resulting age-specific mobility reductions are defined in Table 1.

Table 1. Age-specific mobility reductions.

Numerical simulations

Time is discretized considering a time step of dt = 0.5 days, with one time step corresponding to the activities performed during a workday (i.e. commuting, social mixing), followed by a time step corresponding to the activities performed out of that timeframe (i.e. social mixing), as typically done in agent-based epidemic models [17]. Influenza transmission within each patch is modeled with binomial processes (see Additional file 1: Section 1). Starting from the initial conditions set by influenza-like-illness surveillance data, we performed $2\cdot10^{3}$ stochastic runs for each model under study. Additional numerical details are provided in the Additional file 1: Section 1.

Belgian school calendar for 2008/2009

Our model was based on the official Belgian school calendar for school year 2008/2009. Classes in Belgium are in session from Monday to Friday, and schools are closed during the weekends. The calendar included the following holidays for the 2008/2009 academic year, during which schools were closed:

- Fall holiday: from October 25 to November 2, including the public holiday of the first and second of November;
- Public holiday of November 11;
- Christmas holiday: from December 20, 2008 to January 4, 2009;
- Winter holiday: from February 21 to March 1;
- Easter holiday: from April 4 to April 17;
- Long weekend: from May 1 to May 3, around the public holiday of May 1;
- Long weekend: from May 21 to May 24, around the public holiday of May 21;
- Long weekend: from May 30 to June 1, around the public holiday of June 1.

From July 1 to August 31 schools are closed for the summer holidays.

Influenza surveillance data for 2008/2009 season

We used influenza surveillance data collected by the Belgian Scientific Institute of Public Health [65]. Data report new ILI episodes registered each week by the network of sentinel general practitioners (GPs). ILI is defined as sudden onset of symptoms, high fever, respiratory symptoms (cough, sore throat) and systemic symptoms (headache, muscular pain). For every episode, additional information is reported: age group (< 5, 5-14, 15-64, 65-84, 85+), hospitalization, antiviral treatment, vaccination status, municipality of residence. The use of ILI surveillance data to approximate influenza incidence is a usual practice [66–68] and in the case of Belgium previous work showed good agreement of ILI data with virological data [69] and robustness across different surveillance systems [70].
Belgian school calendar for 2008/2009

Our model was based on the official Belgian school calendar for school year 2008/2009. Classes in Belgium are in session from Monday to Friday, and schools are closed during the weekends. The calendar included the following holidays for the 2008/2009 academic year during which schools were closed: the Fall holiday, from October 25 to November 2, including the public holidays of the first and second of November; the public holiday of November 11; the Christmas holiday, from December 20, 2008 to January 4, 2009; the Winter holiday, from February 21 to March 1; the Easter holiday, from April 4 to April 17; a long weekend from May 1 to May 3, around the public holiday of May 1; a long weekend from May 21 to May 24, around the public holiday of May 21; and a long weekend from May 30 to June 1, around the public holiday of June 1. From July 1 to August 31 schools are closed for the summer holidays.

Influenza surveillance data for the 2008/2009 season

We used influenza surveillance data collected by the Belgian Scientific Institute of Public Health [65]. Data report the new ILI episodes registered each week by the network of sentinel general practitioners (GPs). ILI is defined as a sudden onset of symptoms, high fever, respiratory symptoms (cough, sore throat) and systemic symptoms (headache, muscular pain). For every episode, additional information is reported: age group (<5, 5-14, 15-64, 65-84, 85+), hospitalization, antiviral treatment, vaccination status, and municipality of residence. The use of ILI surveillance data to approximate influenza incidence is common practice [66–68]; in the case of Belgium, previous work showed good agreement of ILI data with virological data [69] and robustness across different surveillance systems [70]. Surveillance data on the new number of cases were aggregated at the district level (including several municipalities, see Table S1 of the Additional file 1) to reduce signal noise. The metapopulation model was calibrated to the 2008/2009 influenza season. Though the simulated dynamics is spatially explicit, calibration was performed on the Brussels district only, i.e. by comparing the simulated incidence profile of Brussels to the ILI incidence data for that district. We did not calibrate the model on the remaining districts, as these were used for validation. The model was seeded with the first non-zero incidence value provided by the surveillance data per district and accounted for possible sampling biases. We used a bootstrap/particle filter Weighted Least Squares (WLS) approach with 20 particles to calibrate our model, fixing the epidemiological parameters described in paragraph "Infection dynamics" and obtaining the per-contact transmissibility β. Calibration was performed on normalized incidence curves to discount effects due to unknown GP consultation rates. Additional details can be found in the Additional file 1: Section 2.
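The calibration can be pictured as scoring candidate values of the per-contact transmissibility β against the normalized Brussels incidence curve with a weighted least squares criterion. The sketch below shows only this scoring logic on a simple grid of β values, not the bootstrap/particle-filter machinery with 20 particles used in the paper; `simulate_incidence` is a hypothetical stand-in for the full stochastic metapopulation simulation.

```python
import numpy as np

def normalize(curve):
    """Normalize a weekly incidence curve to unit sum, discounting the unknown GP consultation rate."""
    curve = np.asarray(curve, dtype=float)
    return curve / curve.sum()

def wls_score(simulated, observed, weights=None):
    """Weighted least squares distance between two normalized incidence curves."""
    sim, obs = normalize(simulated), normalize(observed)
    if weights is None:
        weights = np.ones_like(obs)
    return float(np.sum(weights * (sim - obs) ** 2))

def calibrate_beta(observed_brussels, beta_grid, simulate_incidence, n_runs=50):
    """Return the beta whose median simulated Brussels curve best matches the ILI data.

    simulate_incidence(beta, n_runs) -> array of shape (n_runs, n_weeks) is assumed to
    wrap the stochastic metapopulation model restricted to the Brussels district.
    """
    scores = []
    for beta in beta_grid:
        runs = simulate_incidence(beta, n_runs)   # stochastic simulations for this beta
        median_curve = np.median(runs, axis=0)    # median weekly incidence across runs
        scores.append(wls_score(median_curve, observed_brussels))
    return beta_grid[int(np.argmin(scores))]
```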
Experimental scenario design

To assess the impact of variations in contacts and mobility due to school closures, we compared the realistic model, based on the Belgian school calendar and integrating the changes described in paragraph "Changes in social mixing and travel behavior during school closure", with a set of experimental scenarios that we describe here. To estimate the relative importance of variations in social mixing vs. variations in travel behavior, we considered:
The travel changes model, where only the variations in mobility occurring during weekends and holidays are considered, whereas social mixing is fixed as on a regular weekday;
The mixing changes model, where only the variations in social mixing occurring during weekends and holidays are considered, whereas travel is fixed as on a regular weekday;
The regular weekday model, where no variations are considered, and social mixing and travel behavior are fixed as on a regular weekday.
To assess the role of each school holiday period, we considered scenarios where each period was removed, one at a time:
The w/o Fall holiday model, where the holiday period from October 25 to November 2 was removed;
The w/o Christmas holiday model, where the holiday period from December 20, 2008 to January 4, 2009 was removed;
The w/o Winter holiday model, where the holiday period from February 21 to March 1 was removed;
The w/o Easter holiday model, where the holiday period from April 4 to April 17 was removed.
In all cases, the holiday period is substituted by the regular course of the week, with regular weekdays and regular weekends. In addition, we tested the w/o holiday model, where all holiday periods of the calendar are removed and only the week structure is kept. We also considered a synthetic scenario where we extended the Christmas holiday by one week, before the start of the break or after its end, referred to as the Christmas holiday extension models. To assess the interplay between the timing of the epidemic and that of the holiday periods, we considered anticipations and delays of the start of the epidemic season, as follows:
The 4 weeks anticipation model (− 4w), where the start of the simulated influenza epidemic is anticipated by 4 weeks with respect to the realistic model calibrated on the empirical data;
The 2 weeks anticipation model (− 2w), as above with an anticipation of 2 weeks;
The 2 weeks delay model (+ 2w), as above with a delay of 2 weeks;
The 4 weeks delay model (+ 4w), as above with a delay of 4 weeks.
In all these cases, the start of the epidemic is the only aspect that is altered, whereas the school calendar (and the associated variations in social mixing and travel behavior) remains fixed. We analyzed the spatial distribution of the force of infection determined by the demographic profile in space. To do so, we studied the distribution of the patch reproductive number \(R^{p}\), which can be calculated as the largest eigenvalue of the next-generation matrix \(\pmb{K}^{p}_{ij}=\frac{\beta}{\mu}(1-g_{i})\frac{N_{i}^{p}}{N^{p}} C_{ij}(t)\) (see Additional file 1: Subsection 1.5) [71, 72]. This is done for the four day types considered in terms of their variations of social mixing, namely regular weekday, regular weekend, holiday weekday, and holiday weekend. Validation of the model is performed by comparing the simulated incidence profiles to the empirical surveillance data at the national and at the district level. In particular, we looked at the peak time difference per district d and stochastic run r: $$ \Delta T^{d}(r)=T^{d}(r)-T^{d}_{ILI}, $$ where \(T^{d}(r)\) is the peak time of the weekly incidence of run r in district d and \(T^{d}_{ILI}\) is the incidence peak time reported by surveillance data in the same district. Medians per patch over the \(2\cdot 10^{3}\) stochastic runs are computed. Scenario analyses are performed in order to assess the difference of an experimental scenario with respect to the realistic model. We quantified the various comparisons in terms of:
The peak time difference per patch \(\Delta T^{p}= T^{p}_{scenario} - T^{p}_{realistic\,\, \text{model}}\), with \(T^{p}\) the median peak time of the incidence curve in patch p computed over all stochastic runs (for both the scenario under study and the realistic model);
The peak incidence relative variation per patch \(\Delta I^{p}= \left(I^{p}_{scenario} - I^{p}_{realistic\, \text{model}}\right)/I^{p}_{realistic\,\text{model}}\), with \(I^{p}\) the median incidence value at peak time in patch p;
The epidemic size relative variation per patch \(\Delta \sigma^{p}= \left(\sigma^{p}_{scenario} - \sigma^{p}_{realistic\,\text{model}}\right)/\sigma^{p}_{realistic\, \text{model}}\), with \(\sigma^{p}\) the median epidemic size in patch p.
Medians and 50% and 95% confidence intervals at the patch level are computed for synthesis. In addition, medians and 50% and 95% confidence intervals of the simulated incidence are also calculated at the national level across the tested scenarios.

Reproducing the empirical influenza spreading pattern

Season 2008/2009 shows an ILI incidence that reaches its peak in week 5 of 2009, both in the Brussels district and at the national level. The incidence visibly slows down during the Christmas holiday (Fig. 2), suggesting that holiday periods may have a measurable effect on transmission.

Calibration results. (a)-(b): Simulated and empirical incidence curves for the district of Brussels (panel a) and for the entire Belgium (panel b).
The incidence curve of Brussels is the sole empirical data used for the calibration of the model. Different vertical axes referring to empirical (black curve, left axis) vs. simulated (red curve, right axis) incidences are used for the sake of comparison of the two curves. Different incidence values are due to unknown GP consultation rates characterising ILI surveillance data. (c)-(f): Probability distribution of the values of the reproductive number \(R^{p}\) computed in each patch following the calibration. They refer to the different day types explored, i.e. a regular weekday (panel c), regular weekend (d), holiday weekday (e), and holiday weekend (f).

The simulated peak time is found to be within one week of the empirically observed time for 76% of the districts, and within two weeks for 90% of them (Figure S2 of the Additional file 1). Only two districts in the Province of Luxembourg showed greater discrepancies (four weeks). We observed a mild tendency towards a radial increase of the peak time difference from Brussels to the edge of the country, with a 4-week difference obtained at the border between Belgium and Luxembourg. The average patch reproductive number is estimated to be R=2.12, corresponding to a per-contact transmissibility β=0.0850 (95% confidence interval (CI) [0.0674,0.0858]) obtained from the calibration procedure (see "Methods"). The variation of \(R^{p}\) at the patch level is driven by the demographic profile of the population and its immunity profile. In addition, it also depends on the day type considered, whether regular or during a holiday, and whether during the week or the weekend (Fig. 2). Larger variations and higher values are obtained for a regular weekday, which has the largest number of contacts, compared to less heterogeneous distributions and smaller \(R^{p}\) values in the other cases. The patch reproductive number is lowest for the holiday weekend, corresponding to the lowest mixing.

Role of changes in individual behaviors during holidays and weekends: social mixing vs. travel

To assess the impact that changes in the social mixing or travel behaviors of individuals have on the epidemic outcome, we tested different experimental scenarios where we independently singled out these aspects. These scenarios are compared to the realistic model calibrated to the 2008/2009 influenza season, defined before, where all behavioral changes associated with the school calendar are considered. Changes in individual behavior induced by weekends and holidays are found to strongly alter the epidemic dynamics, leading to a considerable delay of the peak time (median of 3.7 weeks across patches, regular weekday model compared to the realistic model, Fig. 3b), a smaller peak incidence (33% median relative change, Fig. 3d) and a smaller total epidemic size (11% median relative change, Fig. 3c).

Role of social mixing vs. travel behavior. (a): Simulated weekly incidence profiles for influenza in Belgium. The realistic model is compared to the travel changes model, the mixing changes model, and the regular weekday model. Median curves are shown for all cases, along with 50% confidence intervals (dark shade) and 95% CI (light shade) for the realistic and regular weekday models (they are not shown for the other models for the sake of visualization).
(b)-(c)-(d): Peak time difference \({\left(\Delta T^{p}= T^{p}_{scenario} - T^{p}_{realistic\,\, \text{model}}\right)}\), relative variation of epidemic size \({\left(\Delta \sigma^{p}= \left(\sigma^{p}_{scenario} - \sigma^{p}_{realistic\,\text{model}}\right)/ \sigma^{p}_{realistic\, \text{model}}\right)}\), and relative variation of peak incidence \({\left(\Delta I^{p}= \left(I^{p}_{scenario} - I^{p}_{realistic\, \text{model}}\right)/I^{p}_{realistic\,\text{model}}\right)}\), respectively, across the three experimental scenarios (see "Methods" for more details). Boxplots refer to the distributions across patches.

Once the variations in individual behavior affecting social mixing or mobility are considered in isolation, social mixing variation during weekends and holidays is found to be mainly responsible for the effects just described. The travel changes model is indeed comparable to the regular weekday model, whereas neglecting changes in mobility (mixing changes model) produces epidemic patterns very similar to the realistic model (zero median variations).

Role of distinct school holiday periods and possible holiday extensions

The school calendar in Belgium during the influenza season includes four long holiday periods: Fall holiday, Christmas holiday, Winter holiday, and Easter holiday (see "Methods"). Cumulatively, the holidays delay the peak time by 1.7 weeks and reduce the epidemic size by approximately 2%, with a 4% reduction of the peak incidence (all median values across patches, Fig. 4). Among all holiday periods, the largest effect is produced by the Christmas holiday, which is responsible for the overall reduction of the epidemic size and a peak delay of about 1 week. The early break of the Fall holiday, instead, has negligible impact. The Winter holiday leads to a very small reduction of the epidemic size (median of 1%), but has no effect on the peak timing or peak incidence. The impact of the Easter holiday is negligible on all indicators. By comparing the effect of the regular weekday model (Fig. 3) with that of the w/o holiday model (Fig. 4), both relative to the realistic model, we find that weekends have a major effect in slowing down the epidemic curve: a difference of \(\Delta T^{p}=-3.7\) [−3.9,−3.6] weeks when no weekends are considered, compared to \(\Delta T^{p}=-1.7\) [−1.9,−1.2] weeks when they are included.

Impact of school holiday periods and holiday extensions. (a)-(b)-(c): Peak time difference, relative variation of epidemic size, and relative variation of peak incidence, respectively, across the following experimental scenarios: w/o Fall holiday model, w/o Christmas holiday model, w/o Winter holiday model, w/o Easter holiday model, w/o holiday model. Boxplots refer to the distributions across patches. (d)-(e)-(f): Peak time difference, relative variation of epidemic size, and relative variation of peak incidence, respectively, for the Christmas holiday extension models, before or after the break. Boxplots refer to the distributions across patches.

Given the major role of the Christmas holiday, we also tested the effect of a 1-week extension, before or after the break. The extension before the Christmas holiday does not impact the resulting epidemic (Fig. 4, panels d-e-f). If the additional week of holiday is considered after the break, no changes to the epidemic timing are observed; however, the peak incidence decreases by 4% (median value).
Interplay of epidemic timing and school calendar: early vs. late influenza seasons

Christmas holidays are found to be the school closure period with the highest impact on the epidemic outcome, on both its timing and burden, for the 2008/2009 influenza season. Here we assess how this result may vary depending on the timing of the season, by investigating its interplay with the school closure calendar. In order to distinguish between effects induced by the timing of the influenza season only and those related to other season-specific features (e.g. severity of the epidemic, strain circulation, weather, and others), we considered the same epidemic simulated with the realistic model. We explored anticipations and delays of this epidemic by two or four weeks and compared the results with the realistic model. The strongest impact is observed for the earliest epidemic (− 4w model), which shows a median peak anticipation of more than one week with respect to the realistic model (once the earlier start is discounted) and a median reduction of the peak incidence of about 10% (Fig. 5). All other epidemics are rather similar to the realistic one, except for the − 2w model, which shows a considerable reduction of the peak incidence (median of approximately 13% across patches). In addition, it is important to note that, differently from the previous effects, the anticipation or delay of the season leads to a considerably larger variation of the simulated epidemic indicators across patches, signaled by the larger confidence intervals reported in Fig. 5.

Effect of epidemic timing. (a): Simulated weekly incidence profiles for influenza in Belgium. The realistic model is compared to the scenarios considering the anticipation or delay of the epidemic (− 4w model, − 2w model, + 2w model, + 4w model). Median curves are shown along with 95% CI (light shade). (b)-(c)-(d): Peak time difference, relative variation of epidemic size and relative variation of peak incidence, respectively, across the considered experimental scenarios. Boxplots refer to the distributions across patches. The peak time difference \(\Delta T^{p}\) discounts the time shift of the initial conditions of the considered model.

Discussion

In this study, we considered the impact of regular school closure on the spatio-temporal spreading pattern of seasonal influenza. We focused on the case study of the 2008/2009 influenza season in Belgium. We used a spatial metapopulation model for the transmission of influenza in the country, based on data on contacts and mobility of individuals, and integrating data-driven changes in mixing and travel behavior during weekends and holiday periods. The model, calibrated on a single district (i.e. a subset of patches, ∼ 3% of the country total), is able to reproduce with fairly good agreement the empirical pattern observed in the country for that season, suggesting that data-driven mixing and mobility are crucial ingredients to capture influenza spatial dynamics [17, 20, 30, 37, 43, 48, 73–75]. The result is a spatially heterogeneous propagation where the two ingredients act at different levels. Mixing is patch-dependent and determined by the local demography. The large variations observed in the distribution of children vs. adults lead to heterogeneous distributions of the values of the reproductive numbers per patch. In specific mixing conditions – e.g. those of a holiday weekend – a large fraction of patches has \(R^{p}\simeq 1\), indicating that those locations are close to the critical conditions for epidemic extinction.
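For concreteness, the patch reproductive number discussed here can be computed directly as the largest eigenvalue of the next-generation matrix \(K^{p}_{ij}=\frac{\beta}{\mu}(1-g_{i})\frac{N^{p}_{i}}{N^{p}}C_{ij}(t)\) given in the Methods. The sketch below does so for an illustrative patch: the children/adults split is a placeholder, while the immunity fractions, contact matrices and β are the values reported above.

```python
import numpy as np

def patch_R(beta, mu, C, N_age, g):
    """Largest eigenvalue of K_ij = beta/mu * (1 - g_i) * N_i/N * C_ij for one patch."""
    N = N_age.sum()
    K = (beta / mu) * ((1 - g) * N_age / N)[:, None] * C
    return float(np.max(np.abs(np.linalg.eigvals(K))))

C_REG_WEEKDAY = np.array([[40.71, 7.84], [7.84, 14.25]])
C_HOL_WEEKEND = np.array([[10.89, 7.20], [7.20, 8.59]])
g = np.array([0.3987, 0.5319])        # initial immunity of children and adults (Methods)
N_age = np.array([30000.0, 70000.0])  # illustrative patch composition (placeholder)

for label, C in [("regular weekday", C_REG_WEEKDAY), ("holiday weekend", C_HOL_WEEKEND)]:
    print(label, round(patch_R(beta=0.085, mu=1 / 3.0, C=C, N_age=N_age, g=g), 2))
```

With these illustrative numbers the regular-weekday matrix yields a value slightly above two and the holiday-weekend matrix a value close to one, in line with the near-critical patches discussed above; the exact figures depend on the assumed patch composition.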
Influenza is mainly sustained in patches having larger \(R^{p}\) during those periods, and epidemic activity is then transferred to other patches through the mobility of infected individuals. Three-fourths of Belgian districts reach their epidemic peak in the simulations within one week of the empirical peak time. Districts exhibiting greater delays lie on the border of the country. This may be due to the model neglecting the mobility coupling between these regions and the neighboring countries, which is considerable in some districts (e.g. the flux of individuals of the Luxembourg province commuting abroad represents almost 40% of the total flux of commuters of the district). Our study considered the country to be isolated for the sake of simplicity. We expect this border effect to be increasingly negligible for larger countries. The simulated incidence profile clearly shows a slowing down in the growth of the number of new infections during the Christmas break, as reported by sentinel surveillance in the country, suggesting that holidays are associated with temporary reductions in influenza transmission. This was also found in previous empirical studies [25, 48]. To identify the mechanisms behind this effect, we isolated the changes in mixing and those in travel behavior during school closure, comparing different experimental scenarios, similarly to Ewing et al. [48]. We found that mixing changes during weekends and holidays lead to a considerable delay of the epidemic, whereas travel changes produce no noticeable effect. Moreover, changes in social contacts would explain the entire difference observed between the realistic model based on the calendar and a model that does not include school closure. This confirms prior modeling findings on winter holidays for several influenza seasons in the United States [48]. In contrast to that work, however, we found that an important mitigation of the epidemic impact at peak time also occurs, besides the peak delay. The strong impact of the variation of mixing behavior is easily interpretable in terms of the reduction of the transmission potential expressed by the reproductive numbers per patch. This mainly results from the reduction of the number of contacts between children, as measured by the social contact survey when schools are closed [32]. Travel changes, on the other hand, do not act directly on the transmission potential but affect the coupling force between epidemics in different patches and the opportunity for individuals to be exposed to the disease. For this reason, changes in travel behavior have a smaller effect, which is found to be negligible for influenza epidemic spread, as also observed in [48]. This is also consistent with the large literature on travel restrictions showing the little or no effect that these interventions have on pandemic spatial spread [38–40, 44, 46, 73, 76–78]. In addition to holidays, we also found that the closure of schools during weekends has a visible effect on the epidemic, periodically dampening transmission, similarly to what was observed in [49]. This is generally not reported by influenza surveillance systems, as data are collected on a weekly basis. For the 2008/2009 influenza season we found that the Christmas holiday, occurring during a growing phase of influenza activity, is the school break responsible for the largest impact in terms of timing (about 1 week anticipation if the holiday is not observed), along with a 5% reduction of the epidemic size.
If the school break occurs earlier (as for the Fall holiday) or much later in the influenza season (e.g. the Easter holiday), no effect is produced on the resulting epidemic. The case of the Winter holiday, occurring during the fadeout of the epidemic, shows that a small reduction of the total number of cases can still be achieved with school closure after the epidemic peak, whereas other studies showed minimal impact [48]. The analysis of a single season illustrates well how the epidemic impact of school closure depends on the interplay between closure timing and influenza season. By systematically exploring this interplay through synthetic scenarios, we confirm the importance of the Christmas holiday in mitigating the influenza epidemic. Most importantly, we found that the break would have the largest impact for a very early season, when school closure would occur at or around the epidemic peak. The reduction in transmission due to fewer contacts leads to a strong reduction of the incidence and ultimately of the total epidemic size, as also observed in pandemic settings [8]. In the other synthetic influenza seasons explored, a rebound effect was obtained when schools reopened after the break, most notably for the early season anticipating the 2008/2009 influenza epidemic by two weeks (−2w model). This was previously observed in other contexts [23, 24, 79–84], also when no additional interventions beyond school closure were considered [24, 80]. The various tested scenarios show that the Christmas break would have a larger mitigating impact if it occurs before (or around) the peak and when the incidence is about half the peak value or larger. Our investigation shows that the role of holiday timing can hardly be inferred from a few examples, and that other breaks beyond Christmas [48] may have an important mitigating impact. Also, the effect of a sequence of holidays occurring in an influenza season cannot simply be derived as the sum of the effects of each holiday period considered separately. Each break indeed affects the epidemic in a different way, altering its subsequent evolution non-linearly, so that the full calendar needs to be considered. Our findings help shed light on previous empirical findings showing no clear pattern for the effects of school closure on peak incidence or total epidemic size when comparing closures before and after the peak [8]. In addition to school breaks already present in the calendar, we also explored a possible extension of the Christmas holiday by one week. We chose this break as it led to the largest epidemic impact in our case study, and also because it generally occurs before the influenza epidemic peak (this is the case for all influenza seasons from the studied season to the current one, pandemic season excluded). As such, we expect it could have a favorable impact on the epidemic outcome in the majority of influenza seasons. Previous work analyzing the length of school closure found that two weeks or more appear to be enough to result in a recognizable effect [14, 24, 25, 80, 84], whereas shorter closures may not be beneficial or may not have an obvious impact [85–89]. Our synthetic results show that the extension would be advantageous only if implemented after the Christmas break, with a mitigation of the peak incidence and a minimal peak delay of a few days. Extensions are generally considered in the realm of reactive closures against a pandemic influenza.
Here we decided to test this scenario as a regular closure given that – in a broader context – authorities in Belgium are currently discussing whether to modify the school calendar for pedagogical reasons: the aim would be to reduce summer holidays and redistribute holiday periods throughout the year [90]. We found that an extension of the Christmas holiday would be beneficial in the management of the influenza season, potentially mitigating its epidemic impact. Our findings are obtained on seasonal influenza, and results on peak delay were also recovered by modeling works on synthetic influenza pandemics considering reactive school closure [17]. The straightforward extension of our conclusions to the pandemic case, however, faces several challenges. First, effects induced by school closure may be specific to the epidemic profile, and therefore they may lead to different results depending on the pandemic under consideration [7]. For example, the beneficial effects of school closure during the 2009 H1N1 pandemic may have resulted from the larger attack rates in children vs. adults. Epidemic contexts impacting age classes more homogeneously may be less affected by school closure. Second, the nature of the school closure may alter the behavior of individuals during that period. In our study we considered holidays that are regularly planned in the school calendar and associated with specific social activities (e.g. vacation trips, family visits and others), for which contact data are available [32]. School closure during an influenza pandemic may be envisioned as a proactive or reactive measure against the ongoing outbreak. Not being planned, it is expected to have a stronger disruptive impact on the social mixing of individuals in the short term compared to regular closure. On the other hand, it is argued that prolonged closure may limit the reduction of contacts in the long term, because of costs, logistics, and reduced compliance [7, 25]. Having shown here that changes in social mixing represent the single element critically responsible for the impact of school closure on the epidemic outcome, we note that modeling results on school closure in the case of a pandemic would be strongly affected by the assumptions made for mixing changes in the absence of data. While a large body of literature has recently focused on behavioral changes during an epidemic [91–95], still little is known to quantify them [30–34, 96–99]. Our work focused on Belgium, as a rather detailed survey was conducted in the country to estimate contact rates in the population for different age classes at different periods of the calendar year [32]. These estimates constituted the input data to parameterize our spatial modeling framework. Modeling approaches to study epidemics in settings where no data exist are often based on the assumption that mixing would be reduced following school closure, and import estimates available from other settings or epidemiological contexts [17, 28, 48]. This may lead to several issues. Contacts and their changes along the calendar may be country-specific [4], thus affecting epidemic results when applied to a different context. Estimates of the overall reduction of the number of contacts during school closure vary widely.
Transmission models fitted to epidemic data estimated reductions ranging from 16-18% for holidays during seasonal influenza in France [25], to 25% in Hong Kong for proactive school closure during the 2009 H1N1 pandemic [14], to 30% for the social distancing interventions (including school closure) implemented in Mexico following the start of the 2009 pandemic [15]. A large-scale population-based prospective survey in Europe estimated the changes in contact patterns for holiday versus regular periods to correspond to a reduction in the reproductive number as high as 33% for some countries, whereas for others no significant decrease was observed [32]. Finally, such an overall reduction does not allow a contact matrix to be fully parameterized. Such evidence does not support the parameterization of mixing changes from different countries and/or epidemic situations (e.g. seasonal vs. pandemic) [48]. The reduction is expected to be heterogeneous across mixing groups, because of compensatory behaviors (e.g. children drastically reduce children-children contacts but increase children-adults contacts during holidays) [32, 34]. Assumptions on the relative role of specific age classes in the absence of data may lead to biases in the modeled epidemic outcome, especially for epidemics reporting large differences in attack rates in children vs. adults. Our work highlights the need to expand our knowledge on contacts and the associated changes induced by social activity or by the epidemic itself, in order to better parameterize models and provide reliable and accurate results for epidemic management. Our study has a set of limitations that we discuss in the following. The host population is divided into two classes only. While a larger heterogeneity is known for the distribution of contacts across age classes [4], our approach still accounts for the major role of children vs. adults in the spread of the disease. Moreover, the validation analysis shows that considering children and adults and the associated mixing and travel behavior is enough to reproduce the spatio-temporal unfolding of the epidemic with good accuracy. Also, we did not distinguish between symptomatic and asymptomatic infections. Santermans et al. [100] investigated the importance of dealing with symptomatic and asymptomatic infections in an epidemic setting based on differences in mixing patterns between ill and healthy (as a proxy for asymptomatic) individuals. Future research should focus on combining the work of [100] with the study outlined here. The study is focused on one season only, the 2008/2009 influenza season. Additional seasons may clearly be included in the analysis; however, our choice aimed at discounting season-specific effects to avoid the uncertainties and discordance found in previous works. Also, we argue that the main effect behind the observed impact lies in the interplay between the incidence profile and holiday timing, all other aspects being equal. To fully assess this aspect, we systematically explored earlier and later epidemics than the 2008/2009 season, thus synthetically accounting for other (similar) influenza seasons. We did not consider age-specific susceptibility, as it was largely addressed, for example, in studies related to the A(H1N1)v2009 pandemic [101]. It would be interesting to explore its effects in future work in addition to social mixing and mobility, thus investigating additional seasonal influenza profiles.
Our travel changes and mixing changes experimental scenarios neglect travel changes and mixing changes independently of each other. The two aspects are expected to be intrinsically dependent; however, no study has yet quantified this dependency, which could inform a better experimental design. Also, we did not take into account uncertainties associated with the social contact rates estimated from the survey data, as previous work showed their limited impact in fitting serological data [102]. Mobility changes from commuting during regular weekdays to non-regular travel during weekends are obtained from travel statistics. We lack, however, specific data on travel behavior for adults during school holidays. We therefore assumed that adults would continue commuting during holiday weekdays. While we expect that a fraction of adults would stop commuting at least for a few days during breaks as they take time off work, we expect this change in travel fluxes (compensated by additional trips to visit families [48]) to have a negligible effect on the simulated epidemic. More drastic changes in travel, i.e. fully neglecting travel changes as in the mixing changes model, indeed did not alter the resulting epidemic.

Conclusions

With a data-driven spatial metapopulation model calibrated on the 2008/2009 influenza season in Belgium, we showed that regular school closure considerably slows down influenza epidemics and mitigates their impact on the population, because of empirically measured changes in social mixing. This may help the management of epidemics and lessen the pressure on the public health infrastructure. The effect is due to both school holidays and weekend closures, the latter periodically dampening transmission. Variations in travel behavior, instead, do not lead to visible effects. The observed impact strongly depends on the timing of the school closure, and to a lesser extent on its duration. The Christmas holiday is the school break generally playing the most important role in mitigating the epidemic course, though variations are observed depending on the influenza season (e.g. early vs. late epidemic). The addition of a one-week extension after the Christmas holiday may represent an additional strategy to further delay the epidemic peak and mitigate its impact.

Abbreviations

ILI: influenza-like-illness; SEIR: Susceptible-Exposed-Infectious-Recovered; WLS: Weighted Least Squares

References

Longini IM, Koopman JS, Monto AS, Fox JP. Estimating household and community transmission parameters for influenza. Am J Epidemiol. 1982; 115(5):736–51. Viboud C, Boëlle PY, Cauchemez S, Lavenu A, Valleron AJ, Flahault A, Carrat F. Risk factors of influenza transmission in households. Br J Gen Pract. 2004; 54(506):684–9. Baguelin M, Flasche S, Camacho A, Demiris N, Miller E, Edmunds WJ. Assessing optimal target populations for influenza vaccination programmes: An evidence synthesis and modelling study. PLoS Med. 2013; 10(10):1–19. Mossong J, Hens N, Jit M, Beutels P, Auranen K, Mikolajczyk R, Massari M, Salmaso S, Tomba GS, Wallinga J, Heijne J, Sadkowska-Todys M, Rosinska M, Edmunds WJ. Social Contacts and Mixing Patterns Relevant to the Spread of Infectious Diseases. PLoS Med. 2008; 5(3):74. Bell D. Nonpharmaceutical Interventions for Pandemic Influenza, National and Community Measures. Emerg Infect Dis. 2006; 12:88–94. WHO: Reducing transmission of pandemic (H1N1) 2009 in school settings. A framework for national and local planning and response. 2009. http://www.who.int/csr/resources/publications/reducing_transmission_h1n1_2009.pdf.
Cauchemez S, Ferguson NM, Wachtel C, Tegnell A, Saour G, Duncan B, Nicoll A. Closure of schools during an influenza pandemic. Lancet Infect Dis. 2009; 9(8):473–81. Jackson C, Vynnycky E, Hawker J, Olowokure B, Mangtani P. School closures and influenza: systematic review of epidemiological studies. BMJ Open. 2013; 3(2):002149. Cauchemez S, Van Kerkhove MD, Archer BN, Cetron M, Cowling BJ, Grove P, Hunt D, Kojouharova M, Kon P, Ungchusak K, Oshitani H, Pugliese A, Rizzo C, Saour G, Sunagawa T, Uzicanin A, Wachtel C, Weisfuse I, Yu H, Nicoll A. School closures during the 2009 influenza pandemic: national and local experiences. BMC Infect Dis. 2014; 14(1):207. Kawaguchi R, Miyazono M, Noda T, Takayama Y, Sasai Y, Iso H. Influenza (h1n1) 2009 outbreak and school closure, osaka prefecture, japan. Emerg Infect Dis. 2009; 15:1685. Paine S, Mercer GN, Kelly PM, Bandaranayake D, Baker MG, Huang QS, Mackereth G, Bissielo A, Glass K, Hope V. Transmissibility of 2009 pandemic influenza A(H1n1) in New Zealand: effective reproduction number and influence of age, ethnicity and importations. Euro Surveill. 2010;15(24). Chao DL, Elizabeth Halloran M, Longini IM. School opening dates predict pandemic influenza a(h1n1) outbreaks in the united states. J Infect Dis. 2010; 202:877–80. Hsueh P-R, Lee P-I, Chiu AW-H, Yen M-Y. Pandemic (h1n1) 2009 vaccination and class suspensions after outbreaks, Taipei city, taiwan. Emerg Infect Dis. 2010; 16:1309–11. Wu JT, Cowling BJ, Lau EHY, Ip DKM, Ho L-M, Tsang T. School closure and mitigation of pandemic (h1n1) 2009, hong kong. Emerg Infect Dis. 2010; 16:538–41. Chowell G, Echevarría-Zuno S, Viboud C, Simonsen L, Tamerius J, Miller MA, Borja-Aburto VH. Characterizing the epidemiology of the 2009 influenza a/h1n1 pandemic in mexico. PLoS Med. 2011; 8(5):1–13. Chowell G, Viboud C, Munayco CV, Gómez J, Simonsen L, Miller MA, Tamerius J, Fiestas V, Halsey ES, Laguna-Torres VA. Spatial and temporal characteristics of the 2009 A/H1n1 influenza pandemic in Peru. PLoS ONE. 2011; 6(6):21287. doi:10.1371/journal.pone.0021287. Merler S, Ajelli M, Pugliese A, Ferguson NM. Determinants of the spatiotemporal dynamics of the 2009 h1n1 pandemic in europe: implications for real-time modelling. PLoS Comput Biol. 2011; 7:1–13. Earn DJD. Effects of school closure on incidence of pandemic influenza in alberta, canada. Ann Intern Med. 2012; 156:173–81. Egger JR, Konty KJ, Wilson E, Karpati A, Matte T, Weiss D. The effect of school dismissal on rates of influenza-like illness in new york city schools during the spring 2009 novel h1n1 outbreak. J Sch Health. 2012; 82:123–30. Apolloni A, Poletto C, Colizza V. Age-specific contacts and travel patterns in the spatial spread of 2009 H1N1 influenza pandemic. BMC Infect Dis. 2013; 13(1):176. Copeland DL, Basurto-Davila R, Chung W, Kurian A, Fishbein DB, Szymanowski P. Effectiveness of a school district closure for pandemic influenza a (h1n1) on acute respiratory illnesses in the community: a natural experiment. Clin Infect Dis. 2013; 56:509–16. Huang KE, Lipsitch M, Shaman J, Goldstein E. The us 2009 a(h1n1) influenza epidemic: quantifying the impact of school openings on the reproductive number. Epidemiol. 2014; 25:203–6. Cruz-Pacheco G, Duran L, Esteva L, Minzoni A, Lopez-Cervantes M, Panayotaros P, Ahued Ortega A, Villasenor Ruiz I. Modelling of the influenza A(H1n1)v outbreak in Mexico City, April-May 2009, with control sanitary measures. Euro Surveill. 2009;14(26). Heymann A, Chodick G, Reichman B, Kokia E, Laufer J. 
Influence of school closure on the incidence of viral respiratory diseases among children and on health care utilization. Pediatr Infect Dis J. 2004; 23(7):675–7. Cauchemez S, Valleron AJ, Boëlle PY, Flahault A, Ferguson NM. Estimating the impact of school closure on influenza transmission from Sentinel data. Nature. 2008; 452(7188):750–4. Gao H, Wong KK, Zheteyeva Y, Shi J, Uzicanin A, Rainey JJ. Comparing observed with predicted weekly influenza-like illness rates during the winter holiday break, united states, 2004-2013. PLOS ONE. 2015; 10(12):1–11. doi:10.1371/journal.pone.0143791. Gemmetto V, Barrat A, Cattuto C. Mitigation of infectious disease at school: targeted class closure vs school closure. BMC Infect Dis. 2014; 14(1):695. Fumanelli L, Ajelli M, Merler S, Ferguson NM, Cauchemez S. Model-Based Comprehensive Analysis of School Closure Policies for Mitigating Influenza Epidemics and Pandemics. PLoS Comput Biol. 2016; 12(1):1004681. Ciavarella C, Fumanelli L, Merler S, Cattuto C, Ajelli M. School closure policies at municipality level for mitigating influenza spread: a model-based evaluation. BMC Infect Dis. 2016; 16(1):576. Eames KTD, Tilston NL, Brooks-Pollock E, Edmunds WJ. Measured Dynamic Social Contact Patterns Explain the Spread of H1N1v Influenza. PLoS Comput Biol. 2012; 8(3):1002425. Jackson C, Mangtani P, Vynnycky E, Fielding K, Kitching A, Mohamed H, Roche A, Maguire H. School Closures and Student Contact Patterns. Emerg Infect Dis. 2011; 17(2):245–7. Hens N, Ayele G, Goeyvaerts N, Aerts M, Mossong J, Edmunds JW, Beutels P. Estimating the impact of school closure on social mixing behaviour and the transmission of close contact infections in eight European countries. BMC Infect Dis. 2009; 9(1):187. Eames K, Tilston N, White P, Adams E, Edmunds W. The impact of illness and the impact of school closure on social contact patterns. Health Technol Assess. 2010; 14(34):267–312. Eames KTD, Tilston NL, Edmunds WJ. The impact of school holidays on the social mixing patterns of school children. Epidemics. 2011; 3(2):103–8. Béraud G, Kazmercziak S, Beutels P, Levy-Bruhl D, Lenne X, Mielcarek N, Yazdanpanah Y, Boëlle PY, Hens N, Dervaux B. The french connection: The first large population-based contact survey in france relevant for the spread of infectious diseases. PLoS ONE. 2015; 10(7):0133203. doi:10.1371/journal.pone.0133203. Grais RF, Ellis JH, Glass GE. Assessing the impact of airline travel on the geographic spread of pandemic influenza. Eur J Epidemiol. 2003; 18(11):1065–72. Viboud C, Bjørnstad ON, Smith DL, Simonsen L, Miller MA, Grenfell BT. Synchrony, waves, and spatial hierarchies in the spread of influenza. Science. 2006; 312(5772):447–51. Cooper BS, Pitman RJ, Edmunds WJ, Gay NJ. Delaying the International Spread of Pandemic Influenza. PLoS Med. 2006; 3(6):212. Ferguson NM, Cummings DAT, Fraser C, Cajka JC, Cooley PC, Burke DS. Strategies for mitigating an influenza pandemic. Nature. 2006; 442(7101):448–52. Epstein JM, Goedecke DM, Yu F, Morris RJ, Wagener DK, Bobashev GV. Controlling Pandemic Flu: The Value of International Air Travel Restrictions. PLOS ONE. 2007; 2(5):401. Fraser C, Donnelly CA, Cauchemez S, Hanage WP, Kerkhove MDV, Hollingsworth TD, Griffin J, Baggaley RF, Jenkins HE, Lyons EJ, Jombart T, Hinsley WR, Grassly NC, Balloux F, Ghani AC, Ferguson NM, Rambaut A, Pybus OG, Lopez-Gatell H, Alpuche-Aranda CM, Chapela IB, Zavala EP, Guevara DME, Checchi F, Garcia E, Hugonnet S, Roth C, Collaboration TWRPA. 
Pandemic Potential of a Strain of Influenza A (H1n1): Early Findings. Science. 2009; 324(5934):1557–61. Balcan D, Colizza V, Gonçalves B, Hu H, Ramasco JJ, Vespignani A. Multiscale mobility networks and the spatial spreading of infectious diseases. Proc Natl Acad Sci USA. 2009; 106(51):21484–9. Balcan D, Hu H, Goncalves B, Bajardi P, Poletto C, Ramasco JJ, Paolotti D, Perra N, Tizzoni M, Broeck W, Colizza V, Vespignani A. Seasonal transmission potential and activity peaks of the new influenza A(H1N1): a Monte Carlo likelihood analysis based on human mobility. BMC Med. 2009; 7(1):45. Bajardi P, Poletto C, Ramasco JJ, Tizzoni M, Colizza V, Vespignani A. Human Mobility Networks, Travel Restrictions, and the Global Spread of 2009 H1n1 Pandemic. PLoS ONE. 2011; 6(1):16591. Abdullah ASM, Thomas GN, McGhee SM, Morisky DE. Impact of severe acute respiratory syndrome (SARS) on travel and population mobility: implications for travel medicine practitioners. J Travel Med. 2004; 11(2):107–11. Poletto C, Gomes M, Pastore y Piontti A, Rossi L, Bioglio L, Chao D, Longini I, Halloran M, Colizza V, Vespignani A. Assessing the impact of travel restrictions on international spread of the 2014 West African Ebola epidemic. Eurosurveillance. 2014; 19(42):20936. Kucharski AJ, Conlan AJK, Eames KTD. School's Out: Seasonal Variation in the Movement Patterns of School Children. PLoS ONE. 2015; 10(6):0128070. doi:10.1371/journal.pone.0128070. Ewing A, Lee EC, Viboud C, Bansal S. Contact, travel, and transmission: The impact of winter holidays on influenza dynamics in the United States. J Infect Dis. 2017; 215(5):732–9. Towers S, Chowell G. Impact of weekday social contact patterns on the modeling of influenza transmission, and determination of the influenza latent period. J Theor Biol. 2012; 312:87–95. doi:10.1016/j.jtbi.2012.07.023. Hanski I, Gaggiotti OE. Ecology, Genetics and Evolution of Metapopulations. Waltham: Elsevier Academic Press; 2004. May RM, Anderson RM. Spatial heterogeneity and the design of immunization programs. Math Biosci. 1984; 72(1):83–111. Grenfell B, Harwood J. (Meta)population dynamics of infectious diseases. Trends Ecol Evol. 1997; 12(10):395–9. Keeling MJ, Rohani P. Estimating spatial coupling in epidemiological systems: a mechanistic approach. Ecol Lett. 2002; 5(1):20–9. be.stat. http://statbel.fgov.be/fr/statistiques/opendata/datasets/population/. Anderson RM, May RM. Infectious Diseases of Humans: Dynamics and Control. Oxford: Oxford University Press; 1991, p. 757. Keeling MJ, Rohani P. Modeling Infectious Diseases in Humans and Animals. Princeton and Oxford: Princeton University Press; 2007. Boelle PY, Ansart S, Cori A, Valleron AJ. Transmission parameters of the A/H1N1 (2009) influenza virus pandemic: a review. Influenza Other Respir Viruses. 2011; 5(5):306–16. Carrat F, Vergu E, Ferguson NM, Lemaitre M, Cauchemez S, Leach S, Valleron AJ. Time Lines of Infection and Disease in Human Influenza: A Review of Volunteer Challenge Studies. Am J Epidemiol. 2008; 167(7):775–85. Yang W, Lipsitch M, Shaman J. Inference of seasonal and pandemic influenza transmission dynamics. Proc Natl Acad Sci. 2015; 112(9):2723–8. KCE: Seasonal influenza vaccination: children or other target groups?KCE Reports. 2013; 204:254. Tafforeau J. Enquête de Santé par téléphone 2008. La vaccination. Technical report, WIV-ISP. 2008. https://his.wiv-isp.be/fr/Documents٪20partages/VA_FR_2008.pdf. 
INS [producer], Directorate General Statistics and Economic Information (DGSEI) [distributor]: Enquête socio-économique générale 2001 (ESEG2001) [electronic files]. available upon request at distributor, Bussels. 2006. http://statbel.fgov.be/fr/statistiques/collecte_donnees/recensement/2001/. INSEE [producer], Centre Maurice Halbwachs (CMH) [distributor]: Récensement de la population 1999 : tableaux mobilités [electronic files]. available upon request at distributor. 1999. http://www.reseau-quetelet.cnrs.fr/. SOeS [producer], Centre Maurice Halbwachs (CMH) [distributor]: Transports et déplacements (ENTD) – 2008 [electronic file]. available upon request at distributor. 2008. http://www.reseau-quetelet.cnrs.fr/. Van Casteren V, Mertens K, Antoine J, Wanyama S, Thomas I, Bossuyt N. Clinical surveillance of the influenza A(H1N1)2009 pandemic through the network of sentinel general practitioners. Arch Public Health. 2010; 68(2):62. Ortiz JR, Zhou H, Shay DK, Neuzil KM, Fowlkes AL, Goss CH. Monitoring influenza activity in the united states: A comparison of traditional surveillance systems with google flu trends. PLOS ONE. 2011; 6(4):1–9. doi:10.1371/journal.pone.0018687. Viboud C, Charu V, Olson D, Ballesteros S, Gog J, Khan F, Grenfell B, Simonsen L. Demonstrating the use of high-volume electronic medical claims data to monitor local and regional influenza activity in the us. PLOS ONE. 2014; 9(7):1–12. doi:10.1371/journal.pone.0102429. Goeyvaerts N, Willem L, Kerckhove KV, Vandendijck Y, Hanquet G, Beutels P, Hens N. Estimating dynamic transmission model parameters for seasonal influenza by fitting to age and season-specific influenza-like illness incidence. Epidemics. 2015; 13(Supplement C):1–9. doi:10.1016/j.epidem.2015.04.002. Bollaerts K, Antoine J, Van Casteren V, Ducoffre G, Hens N, Quoilin S. Contribution of respiratory pathogens to influenza-like illness consultations. Epidemiol Infect. 2013; 141(10):2196–204. doi:10.1017/S0950268812002506. Vandendijck Y, Faes C, Hens N. Eight years of the great influenza survey to monitor influenza-like illness in flanders. PLOS ONE. 2013; 8(5):1–8. doi:10.1371/journal.pone.0064156. Diekmann O, Heesterbeek JAP, Metz JAJ. On the definition and the computation of the basic reproduction ratio R 0 in models for infectious diseases in heterogeneous populations. J Math Biol. 1990; 28(4):365–82. Wallinga J, Teunis P, Kretzschmar M. Using Data on Social Contacts to Estimate Age-specific Transmission Parameters for Respiratory-spread Infectious Agents. Am J Epidemiol. 2006; 164(10):936–44. Brownstein JS, Wolfe CJ, Mandl KD. Empirical Evidence for the Effect of Airline Travel on Inter-Regional Influenza Spread in the United States. PLOS Med. 2006; 3(10):401. Crépey P, Barthélemy M. Detecting robust patterns in the spread of epidemics: a case study of influenza in the United States and France. Am J Epidemiol. 2007; 166(11):1244–51. doi:10.1093/aje/kwm266. Charaudeau S, Pakdaman K, Boëlle PY. Commuter Mobility and the Spread of Infectious Diseases: Application to Influenza in France. PLOS ONE. 2014; 9(1):83002. Hollingsworth TD, Ferguson NM, Anderson RM. Will travel restrictions control the international spread of pandemic influenza?Nat Med. 2006; 12(5):497–9. Colizza V, Barrat A, Barthelemy M, Valleron AJ, Vespignani A. Modeling the Worldwide Spread of Pandemic Influenza: Baseline Case and Containment Interventions. PLoS Med. 2007; 4(1):13. Bell DM. 
World Health Organization Working Group on International and Community Transmission of SARS: Public health interventions and SARS spread, 2003. Emerging Infect Dis. 2004; 10(11):1900–6. Hatchett RJ, Mecher CE, Lipsitch M. Public health interventions and epidemic intensity during the 1918 influenza pandemic. PNAS. 2007; 104(18):7582–7. doi:10.1073/pnas.0610941104. Accessed 11 Apr 2017 Fujii H, Takahashi H, Ohyama T, Hattori K, Suzuki S. Evaluation of the school health surveillance system for influenza, Tokyo, 1999-2000. Jpn J Infect Dis. 2002; 55(3):97–9. Bootsma MCJ, Ferguson NM. The effect of public health measures on the 1918 influenza pandemic in U,S. cities. PNAS. 2007; 104(18):7588–93. doi:10.1073/pnas.0611071104. Accessed 11 Apr 2017 Baker MG, Wilson N, Huang QS, Paine S, Lopez L, Bandaranayake D, Tobias M, Mason K, Mackereth GF, Jacobs M, Thornley C, Roberts S, McArthur C. Pandemic influenza A(H1n1)v in New Zealand: the experience from April to August 2009. Euro Surveill. 2009;14(34). Birrell PJ, Ketsetzis G, Gay NJ, Cooper BS, Presanis AM, Harris RJ, Charlett A, Zhang XS, White PJ, Pebody RG, Angelis DD. Bayesian modeling to unmask and predict influenza A/H1n1pdm dynamics in London. PNAS. 2011; 108(45):18238–43. doi:10.1073/pnas.1103002108. Accessed 11 Apr 2017 Baguelin M, Hoek AJV, Jit M, Flasche S, White PJ, Edmunds WJ. Vaccination against pandemic influenza A/H1n1v in England: a real-time economic evaluation. Vaccine. 2010; 28(12):2370–84. doi:10.1016/j.vaccine.2010.01.002. Danis K, Fitzgerald M, Connell J, Conlon M, Murphy PG. Lessons from a pre-season influenza outbreak in a day school. Commun Dis Public Health. 2004; 7(3):179–83. Cowling BJ, Lau EHY, Lam CLH, Cheng CKY, Kovar J, Chan KH, Peiris JSM, Leung GM. Effects of school closures, 2008 winter influenza season, Hong Kong. Emerging Infect Dis. 2008; 14(10):1660–2. doi:10.3201/eid1410.080646. Johnson AJ, Moore ZS, Edelson PJ, Kinnane L, Davies M, Shay DK, Balish A, McCarron M, Blanton L, Finelli L, Averhoff F, Bresee J, Engel J, Fiore A. Household responses to school closure resulting from outbreak of influenza B, North Carolina. Emerging Infect Dis. 2008; 14(7):1024–30. doi:10.3201/eid1407.080096. Rodriguez CV, Rietberg K, Baer A, Kwan-Gett T, Duchin J. Association between school closure and subsequent absenteeism during a seasonal influenza epidemic. Epidemiology. 2009; 20(6):787–92. doi:10.1097/EDE.0b013e3181b5f3ec. Calatayud L, Kurkela S, Neave PE, Brock A, Perkins S, Zuckerman M, Sudhanva M, Bermingham A, Ellis J, Pebody R, Catchpole M, Heathcock R, Maguire H. Pandemic (H1n1) 2009 virus outbreak in a school in London, April-May 2009: an observational study. Epidemiol Infect. 2010; 138(2):183–91. doi:10.1017/S0950268809991191. Glorieux I, Vandeweyer J. Gezin en school. De kloof voorbij, de grens gezet? VLOR. 2011. http://www.vlor.be/publicatie/gezin-en-school-de-kloof-voorbij-de-grens-gezet. Funk S, Salathé M, Jansen VAA. Modelling the influence of human behaviour on the spread of infectious diseases: a review. J R Soc Interface. 2010; 7(50):1247–56. Perra N, Balcan D, Gonçalves B, Vespignani A. Towards a Characterization of Behavior-Disease Models. PLOS ONE. 2011; 6(8):23084. Rizzo C, Fabiani M, Amlôt R, Hall I, Finnie T, Rubin GJ, Cucuiu R, Pistol A, Popovici F, Popescu R, Joose V, Auranen K, Leach S, Declich S, Pugliese A. Survey on the Likely Behavioural Changes of the General Public in Four European Countries During the 2009/2010 Pandemic In: Manfredi P, D'Onofrio A, editors. 
Modeling the Interplay Between Human Behavior and the Spread of Infectious Diseases. New York: Springer-Verlag: 2013. p. 23–41. Funk S, Bansal S, Bauch CT, Eames KTD, Edmunds WJ, Galvani AP, Klepac P. Nine challenges in incorporating the dynamics of behaviour in infectious diseases models. Epidemics. 2015; 10:21–5. Verelst F, Willem L, Beutels P. Behavioural change models for infectious disease transmission: a systematic review (2010–2015). J R Soc Interface. 2016; 13(125):20160820. Zhang T, Fu X, Kwoh CK, Xiao G, Wong L, Ma S, Soh H, Lee GK, Hung T, Lees M. Temporal factors in school closure policy for mitigating the spread of influenza. J Public Health Policy. 2011; 32(2):180–97. doi:10.1057/jphp.2011.1. Zhang T, Fu X, Xiao G, Wong L, Kwoh CK, Kee G, Lee K, Hung T. Evaluating Temporal Factors in Combined Interventions of Workforce Shift and School Closure for Mitigating the Spread of Influenza. 2012; 7(3). doi:10.1371/journal.pone.0032203. Chen SC, You ZS. Social contact patterns of school-age children in Taiwan: comparison of the term time and holiday periods. Epidemiol Infect. 2015; 143(6):1139–47. doi:10.1017/S0950268814001915. Luh DL, You ZS, Chen SC. Comparison of the social contact patterns among school-age children in specific seasons, locations, and times. Epidemics. 2016; 14:36–44. doi:10.1016/j.epidem.2015.09.002. Santermans E, Kerckhove KV, Azmon A, Edmunds WJ, Beutels P, Faes C, Hens N. Structural differences in mixing behavior informing the role of asymptomatic infection and testing symptom heritability. Math Biosci. 2017; 285(Supplement C):43–54. doi:10.1016/j.mbs.2016.12.004. Cauchemez S, Donnelly CA, Reed C, Ghani AC, Fraser C, Kent CK, Finelli L, Ferguson NM. Household transmission of 2009 pandemic influenza a (h1n1) virus in the united states. N Engl J Med. 2009; 361(27):2619–27. Goeyvaerts N, Hens N, Ogunjimi B, Aerts M, Shkedy Z, Damme PV, Beutels P. Estimating infectious disease parameters from data on social contacts and serological status. J R Stat Soc: Ser C: Appl Stat. 2010; 59(2):255–77. doi:10.1111/j.1467-9876.2009.00693.x. We thank Shweta Bansal for useful discussions on the study. We thank the Belgian Scientific Institute of Public Health for making the surveillance data available to us. The present work was partially supported by the French ANR project HarMS-flu (ANR-12-MONU-0018) to GDL and VC; the EC-Health project PREDEMICS (Contract No. 278433) to VC; the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 682540 - TransMID) to NH, PC; the University of Antwerp scientific chair in Evidence-Based Vaccinology to NH; the Antwerp Study centre for Infectious Diseases (ASCID) in 2009-2016 to NH; the IAP Research Network P7/06 of the Belgian State (Belgian Science Policy) to KVK, NH; the PHC Tournesol Flanders program no.35686NE to GDL, KVK, CP, NH, VC. Commuting data is available upon request from the Directorate General Statistics and Economic Information (DGSEI) [62]. Demographic data is publicly available from Belgian Statistics [54]. Surveillance data is available upon request from the Belgian Scientific Institute of Public Health. Contact data are reported in the paper. Sorbonne Universités, UPMC Univ. 
Paris 06, INSERM, Institut Pierre Louis d'Epidémiologie et de Santé Publique (IPLESP UMR-S 1136), Paris, 75012, France: Giancarlo De Luca, Chiara Poletto & Vittoria Colizza
Interuniversity Institute for Biostatistics and Statistical Bioinformatics, Hasselt University, Agoralaan Gebouw D, Diepenbeek, 3590, Belgium: Kim Van Kerckhove, Pietro Coletti & Niel Hens
Scientific Institute of Public Health (WIV-ISP), Public Health and Surveillance Directorate, Epidemiology of infectious diseases Service, Rue Juliette/Wytsmanstraat 14, Brussels, 1050, Belgium: Nathalie Bossuyt
Centre for Health Economics Research and Modelling Infectious Diseases, Vaccine and Infectious Disease Institute, University of Antwerp, Universiteitsplein 1, Wilrijk, 2610, Belgium: Niel Hens
ISI Foundation, Torino, 10126, Italy: Vittoria Colizza
NH, VC and CP conceived and designed the study. KVK analyzed the contact data. GDL developed the model, performed the numerical simulations, and drafted the first version of the manuscript. GDL and PC performed the statistical analysis and prepared the figures. NB interpreted the results. All authors contributed to the interpretation of the results, edited and approved the final manuscript. Correspondence to Vittoria Colizza. The University of Antwerp scientific chair in Evidence-Based Vaccinology (NH) is sponsored by a gift from Pfizer and GSK. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The Additional File provides additional details on the modeling approach, the calibration procedure, as well as additional validation results. (PDF 839 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Luca, G.D., Kerckhove, K.V., Coletti, P. et al. The impact of regular school closure on seasonal influenza epidemics: a data-driven spatial transmission model for Belgium. BMC Infect Dis 18, 29 (2018). doi:10.1186/s12879-017-2934-3
Keywords: Epidemic modeling; Spatial transmission
Journal Prestige (SJR): 0.726. Citation Impact (CiteScore): 1. Hybrid journal (it can contain Open Access articles). ISSN (Print) 1617-9447; ISSN (Online) 2195-3724. Published by Springer-Verlag.
Geometric Julia–Wolff Theorems for Weak Contractions
Abstract: In this paper we review the familiar collection of results that concern holomorphic maps of a disc or half-plane into itself that are due to Schwarz, Pick, Julia, Denjoy and Wolff. We give a coherent geometric treatment of these results entirely in terms of the ideas of geodesics, horocycles and G-spaces as introduced by Busemann. In particular, we show that the results of Wolff and Julia hold for all weak contractions of the hyperbolic metric (whether holomorphic or not); holomorphicity plays no role in the arguments. These results apply to holomorphic maps because the Schwarz–Pick lemma implies that holomorphic maps are weak contractions. An important ingredient in the proofs are several projections of the hyperbolic plane onto a geodesic which are weak contractions relative to the hyperbolic distance.
PubDate: 2022-12-23
The Range of Hardy Numbers for Comb Domains
Abstract: Let \(D\ne \mathbb {C}\) be a simply connected domain and f be a Riemann mapping from \(\mathbb {D}\) onto D. The Hardy number of D is the supremum of all p for which f belongs to the Hardy space \(H^p(\mathbb {D})\). A comb domain is a domain whose complement is the union of an infinite number of vertical rays symmetric with respect to the real axis. In this paper we prove that, for \(p>0\), there is a comb domain with Hardy number equal to p if and only if \(p\in [1,+\infty ]\). It is known that the Hardy number is related to the moments of the exit time of Brownian motion from the domain. In fact, Burkholder proved that the Hardy number of a simply connected domain is twice the supremum of all \(p>0\) for which the p-th moment of the exit time of Brownian motion is finite. Therefore, our result implies that given \(p < q\) there exists a comb domain with finite p-th moment but infinite q-th moment if and only if \(q\ge 1/2\). This answers a question posed by Boudabra and Markowsky.
A Note on Electrified Droplets
Abstract: We give an in-depth analysis of a 1-parameter family of electrified droplets first described in [19]. We also investigate a technique for searching for new solutions to the droplet equation, and rederive via this technique a 1-parameter family of physical droplets, which were first discovered by Crowdy [4]. We speculate on extensions of these solutions, in particular to the case of a droplet with multiple connected components.
The Geometric Characterizations for a Combination of Generalized Struve Abstract: Abstract In the present paper, we establish geometric properties, such as starlikeness and convexity of order \(\alpha \) ( \(0 \le \alpha < 1\) ), and close-to-convexity in the open unit disk \({\mathbb {U}} := \{z \in {\mathbb {C}}: z <1\}\) for a combination of a normalized form of the generalized Struve function of order p, \(w_{p,b,c}(z)\) , defined by \(D_{p,b,c}(z) = 2^{p}\sqrt{\pi } \, \Gamma (p + b/2 + 1) z^{(-p+1)/2}d_{p,b,c}(\sqrt{z})\) , where \(d_{p,b,c}(z) := -pw_{p,b,c}(z)+zw_{p,b,c}^{\prime }(z)\) , with \(p, b, c \in {\mathbb {C}}\) and \(\kappa = p + b/2+1 \notin \{0,-1,-2,\dots \}\) . We determine conditions for the parameters c and \(\kappa \) for which \(f \in {\mathcal {R}}(\beta ) = \left\{ f \in {\mathcal {A}}({\mathbb {U}}): \mathrm{Re} f^{\prime }(z) > \beta , z \in {\mathbb {U}} \right\} \) , \(0 \le \beta <1\) , indicates that the convolution product \(D_{p,b,c}*f\) belongs to the spaces \({\mathcal {H}}^{\infty }({\mathbb {U}})\) and \({\mathcal {R}}(\gamma )\) with \(\gamma \) depending on \(\alpha \) and \(\beta \) , where \({\mathcal {A}}({\mathbb {U}})\) denotes the class of all normalized analytic functions in \({\mathbb {U}}\) and \({\mathcal {H}}^{\infty }({\mathbb {U}})\) is the space of all bounded analytic functions in \({\mathcal {A}}({\mathbb {U}})\) . We also obtain sufficient conditions in terms of the expansion coefficients for \(f \in {\mathcal {A}}({\mathbb {U}})\) to be in some subclasses of the class of univalent functions. Motivation has come from the vital role of special functions in geometric function theory. Common Universal Meromorphic Functions for Translation and Dilation Abstract: Abstract We consider translation and dilation mappings acting on the spaces of meromorphic functions on the complex plane and the punctured complex plane, respectively. In both cases, we show that there is a dense \(G_{\delta }\) -subset of meromorphic functions that are common universal for certain uncountable families of these mappings. While a corresponding result for translations exists for entire functions, our result for dilations has no holomorphic counterpart. We further obtain an analogue of Ansari's Theorem for the mappings we consider, which is used as a key tool in the proofs of our main results. Homeomorphisms of Finite Metric Distortion Between Riemannian Manifolds Abstract: Abstract The theory of multidimensional quasiconformal mappings employs three main approaches: analytic, geometric (modulus) and metric ones. In this paper, we use the last approach and establish the relationship between homeomorphisms of finite metric distortion (FMD-homeomorphisms), finitely bi-Lipschitz, quasisymmetric and quasiconformal mappings on Riemannian manifolds. One of the main results shows that FMD-homeomorphisms are lower Q-homeomorphisms. As an application, there are obtained some sufficient conditions for boundary extensions of FMD-homeomorphisms. These conditions are illustrated by several examples of FMD-homeomorphisms. Weighted Uniform Convergence of Entire Grünwald Operators on the Real Abstract: Abstract We consider weighted uniform convergence of entire analogues of the Grünwald operator on the real line. The main result deals with convergence of entire interpolations of exponential type \(\tau >0\) at zeros of Bessel functions in spaces with homogeneous weights. We discuss extensions to Grünwald operators from de Branges spaces. 
Approximation by Faber–Laurent Rational Functions in Variable Exponent Morrey Spaces Abstract: Abstract Let G be a finite Jordan domain bounded by a Dini-smooth curve \(\Gamma \) in the complex plane \({\mathbb {C}}\) . In this work, approximation properties of the Faber–Laurent rational series expansions in variable exponent Morrey spaces \(L^{p(\cdot ),\lambda (\cdot )}(\Gamma )\) are studied. Also, direct theorems of approximation theory in variable exponent Morrey–Smirnov classes, defined in domains with a Dini-smooth boundary, are proved. Essential Norm of Difference of Composition Operators from Analytic Besov Spaces to Bloch Type Spaces Abstract: Abstract The boundedness of the difference of composition operators acting from the analytic Besov spaces to the Bloch type spaces is characterized. Some upper and lower bounds for the essential norm of the operator are also given. Results on Certain Difference Polynomials and Shared Values Abstract: Abstract In this paper, we study uniqueness questions for meromorphic functions for which certain difference polynomials share a finite non-zero value, and give mathematical expressions for the meromorphic functions in the conclusions of the main results in the present paper, which are the related to the questions studied in Li–Yu (Bull Korean Math Soc 55(5):1529–1561, 2018). Lawrence Allen Zalcman 1943–2022 Volterra-Type Integration Operators Between Weighted Bergman Spaces and Hardy Spaces Abstract: Abstract Given an analytic function g and a \(\mathcal {D}\) weight \(\omega \) on the unit disk \(\mathbb {D}=\{z \in \mathbb {C} : z <1\}\) , we characterize the boundedness and compactness of the Volterra-type integration operator $$\begin{aligned} J_{g}(f)(z)=\int _{0}^{z}f(\lambda )g'(\lambda )d\lambda \end{aligned}$$ between the weighted Bergman spaces \(L_{a}^{p}(\omega )\) and the Hardy spaces \(H^{q}\) for \(0<p,q<\infty \) . Paatero's Classes V(k) as Subsets of the Hornich Space Abstract: Abstract In this article we consider Paatero's classes V(k) of functions of bounded boundary rotation as subsets of the Hornich space \(\mathcal H\) . We show that for a fixed \(k\ge 2\) the set V(k) is a closed and convex subset of \(\mathcal H\) and is not compact. We identify the extreme points of V(k) in \(\mathcal H\) . On a Functional Inequality of Alzer and Salinas Abstract: Abstract We deal with the functional inequality $$\begin{aligned} f(x)f(y) - f(xy) \le f (x) + f (y) - f(x+y) \end{aligned}$$ for \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) , which was introduced by Horst Alzer and Luis Salinas. We show that if f is a solution that is differentiable at 0 and \(f(0)=0\) , then \(f=0\) on \({\mathbb {R}}\) or \(f(x) = x\) for all \(x \in {\mathbb {R}}\) . Next, we prove that every solution f which satisfies some mild regularity and such that \(f(0)\ne 0\) is globally bounded. F. Wiener's Trick and an Extremal Problem for $$H^p$$ H p Abstract: Abstract For \(0<p \le \infty \) , let \(H^p\) denote the classical Hardy space of the unit disc. We consider the extremal problem of maximizing the modulus of the kth Taylor coefficient of a function \(f \in H^p\) which satisfies \(\Vert f\Vert _{H^p}\le 1\) and \(f(0)=t\) for some \(0 \le t \le 1\) . In particular, we provide a complete solution to this problem for \(k=1\) and \(0<p<1\) . We also study F. Wiener's trick, which plays a crucial role in various coefficient-related extremal problems for Hardy spaces. 
DOI: 10.1007/s40315-022-00469-x Lawrence Allen Zalcman: List of Publications New Subclasses of Univalent Mappings in Several Complex Variables: Extension Operators and Applications Abstract: Abstract In this paper we define new subclasses of univalent mappings in the case of several complex variables. We will focus our attention on a particular class, denoted \(E^*_1\) , and observe that in the case of one complex variable, \(E^*_1(U)\) coincides with the class of convex functions K on the unit disc. However, if \(n\ge {2}\) , then \(E^*_1(\mathbb {B}^n)\) is different from the class of convex mappings \(K(\mathbb {B}^n)\) on the Euclidean unit ball \(\mathbb {B}^n\) in \(\mathbb {C}^n\) . Along with this, we will study other properties of the class \(E^*_1\) on the unit polydisc, respectively on the Euclidean unit ball in \(\mathbb {C}^n\) . In the second part of the paper we discuss the Graham–Kohr extension operator \(\Psi _{n,\alpha }\) (defined by Graham and Kohr in Complex Variab. Theory Appl. 47:59–72, 2002). They proved that the extension operator \(\Psi _{n,\alpha }\) does not preserve convexity for \(n\ge {2}\) for all \(\alpha \in [0,1]\) . However, in this paper we prove that \(\Psi _{n,0}(K)\) and \(\Psi _{n,1}(K)\) are subsets of the class \(E^*_1(\mathbb {B}^n)\) which is different from the class \(K(\mathbb {B}^n)\) for the Euclidean case. A Jentzsch-Theorem for Kapteyn, Neumann and General Dirichlet Series Abstract: Abstract Comparing phase plots of truncated series solutions of Kepler's equation by Lagrange's power series with those by Bessel's Kapteyn series strongly suggests that a Jentzsch-type theorem holds true not only for the former but also for the latter series: each point of the boundary of the domain of convergence in the complex plane is a cluster point of zeros of sections of the series. We prove this result by studying properties of the growth function of a sequence of entire functions. For series, this growth function is computable in terms of the convergence abscissa of an associated general Dirichlet series. The proof then extends, besides including Jentzsch's classical result for power series, to general Dirichlet series, to Kapteyn, and to Neumann series of Bessel functions. Moreover, sections of Kapteyn and Neumann series generally exhibit zeros close to the real axis which can be explained, including their asymptotic linear density, by the theory of the distribution of zeros of entire functions. DOI: 10.1007/s40315-022-00468-y Generalization of Proximate Order and Applications Abstract: Abstract We introduce a concept of a quasi proximate order which is a generalization of a proximate order and allows us to study efficiently analytic functions whose order and lower order of growth are different. We prove an existence theorem for a quasi proximate order, i.e. a counterpart of Valiron's theorem for a proximate order. As applications, we generalize and complement some results of M. Cartwright and C. N. Linden on asymptotic behavior of analytic functions in the unit disc. Comparison and Möbius Quasi-invariance Properties of Ibragimov's Metric Abstract: Abstract For a domain \( D \subsetneq {\mathbb {R}}^{n} \) , Ibragimov's metric is defined as $$\begin{aligned} u_{D}(x,y) = 2\, \log \frac{ x-y +\max \{d(x),d(y)\}}{\sqrt{d(x)\,d(y)}}, \quad \quad x,y \in D, \end{aligned}$$ where d(x) denotes the Euclidean distance from x to the boundary of D. 
In this paper, we compare Ibragimov's metric with the classical hyperbolic metric in the unit ball or in the upper half space, and prove sharp comparison inequalities between Ibragimov's metric and some hyperbolic type metrics. We also obtain several sharp distortion inequalities for Ibragimov's metric under some families of Möbius transformations.
Assessment of a decentralized grid-connected photovoltaic (PV) / wind / biogas hybrid power system in northern Nigeria Ismail Abubakar Jumare ORCID: orcid.org/0000-0003-0316-52411,2, Ramchandra Bhandari3 & Abdellatif Zerga2 Electricity is considered a fundamental service which is highly correlated with sustainable development. Nigeria will serve as a case study that has been experiencing an energy deficit, and severely needs a strong adoption of alternative energy sources. This paper provides a detailed assessment of a grid-connected photovoltaic/wind/biogas hybrid energy system in the northern part of Nigeria using a combined Hybrid Optimization Model for Electric Renewables (HOMER), Microsoft Excel, and Ganzleitliche Bilanz (GaBi) tools. They are based on techno-economic modeling and optimization as well as comparison with the same configuration in its off-grid form. Sensitivity analysis as well as an energy efficiency assessment of the proposed grid-connected system was carried out, followed by a supplementary economic benefit assessment of a system switch over and an evaluation of the impacts of life cycle emissions. A wrap-up reliability assessment based on the utility grid status quo and policy implications was also carried out. The results of the analysis for the grid-connected system showed a 3% increase in the overall energy supply, and a 68% and 85% decrease in net present costs (NPC) and levelized costs of energy (LCOE), respectively, with avoided emissions as compared to its comparable off-grid configuration. Moreover, the energy efficiency (EE) determined for the proposed grid-connected system resulted in a massive reduction in the component sizing, energy supply, and an ultimate 88% and 81% reduction in overall NPC and LCOE, respectively. The sensitivity analysis as well as the other supplementary evaluations indicated clear impacts on the different performance measures. This approach is worthy of adoption coupled with expansions for an effective solution to the energy deficit and its sustainability in the case study country. This could be successfully provided if all the reliability concerns for the utility grid and policy measures are addressed significantly. Hybrid renewable systems deserve strong consideration for grid integration on a sustainability basis. Energy supply increased by 3% from the standalone to the proposed grid-connected system. An energy efficiency measure for the grid-connected system led to an 81% reduction in the LCOE. Sensitivity and other supplementary analysis showed impacts on system performance. Surmounting utility grid challenges and strong policy interventions are necessary. Numerous research studies conducted in the field of energy have shown the depleting nature of conventional energy sources, especially fossil fuels, coupled with direct consequences of global warming. This necessitates searching for alternatives in energy solutions. These alternative energy sources are in other words termed renewable energy sources such as solar, wind, hydro, biomass, and geothermal energy. However, the combination of two or more of these sources is sometimes necessary for giving rise to a hybrid energy system. Hence, by definition, a hybrid energy system is the combination of two or more energy conversion devices aimed at overcoming limitations associated with either or all [1]. The major limitation of renewable systems and their sources has been intermittent availability, as some resources are available in stock while some fluctuate. 
The hybrid system has some advantages due to an incorporation of renewable sources as described in the literature. These are fuel flexibility due to different adjustments that could be provided in combination to ensure optimum operation and efficiency of systems as well as reliability, and viability in terms of economics, energy security, improved power quality, reduced carbon emission, fossil fuels saving, and employment opportunity [1, 2]. In addition, a power generating system could be either decentralized (distributed) or centralized. The former involves having different sets of power generating systems for different load demands, and is the intended target for this research paper. However, the latter involves securing a single power plant to one or many load centers without the need for distribution in the system execution [3]. Centralized power generation could be relatively more challenging than the decentralized type due to its high costs of execution and more losses of operation. This is because the power has to be transported either on a national/regional utility grid or a mini/isolated grid depending on the network category. Likewise, still on the basis of a network, the power system may be single component-based or hybrid-based and could be conventionally designed in two ways, viz. grid-connected and off-grid or standalone. The grid-connected hybrid system works in such a way that the power generated will be integrated in a grid network on either the transmission, sub-transmission, or distribution site of the network, and the load gets its power from the grid or from the system directly where excesses are forwarded to the grid and deficits require grid power sourcing. The major advantage of the grid-connected system is the fact that flexibility exists in such a way that a loss or shutdown of the system does not necessarily result in a loss of power for the load, since such losses or outages could be compensated by other alternatives in the utility grid [4]. Likewise, excess generation—when compared to electricity consumed from the grid—results in credits in line with the renewable power policy instruments and is based on countries' regulations. In contrast, off-grid-based systems are usually deployed in remote areas, i.e., areas that are far away from the existing grid where the grid extension to those locations is technically or economically impossible or challenging [4]. It has less impact as compared to a grid-connected system due to the flexibility and credits securing advantages, which are not particular to it. Based on the above information, the design approach generally performed for any hybrid power system is stage-wise, and usually begins with an energy demand assessment, resource assessment, assessment of the barriers/constraints in terms of costs, the environmental influences, etc., and finally it has to fulfill the demands of an energy system coupled with optimization and so on. This can be addressed using different software packages, such as a Hybrid Optimization Model for Electric Renewables (HOMER), a Matrix Laboratory (MatLab/Simulink), a System Advisor Model (SAM), a Transient System (TRNSYS), or a Ganzleitlichen Bilanz (GaBi) tool. They ease the modeling, optimization and control, economic analysis, life cycle assessments, and so on. Adopting two or more of these software packages becomes necessary depending on the research questions to be addressed in a power system design, as limitations may arise in handling only one. 
Many studies regarding a grid-connected power system have been carried out both on the African continent and beyond. Some could be used to underline the novelty of this research paper. Pan and Dinter [5] have demonstrated the capability of a concentrating solar power (CSP) and PV hybrid system of 100 MW nameplate capacity by addressing the need of a 100-MW base load capacity for the grid in South Africa. This analysis was performed using SAM for simulating different design configurations both individually and in combined form based upon different storage sizes for observing the energy yield, capacity factor, and economic viability. Gbalimene et al. [6] have studied the techno-economic analysis for grid integration of hybrid-based renewable energy technologies in order to satisfy the load distribution of a particular building with a peak load of about 305 kW in Abuja, Nigeria. The components considered were PV/wind without battery storage which was analyzed using HOMER. Simulation and optimization have been carried out, and different feasible configurations have been obtained. In addition, Numbi and Malinga [7] have proposed an economic analysis of a 3 kW residential single-phase grid interactive solar PV system in eThekwini municipality of South Africa. The approach used was the optimal control model, which is a powerful tool for solving several energy management problems. In the simulation results, variations were done for the feed-in tariff (FiT) for observing the impact on energy cost savings and the payback period. An optimal grid-connected hybrid PV/wind with battery storage system sizing was performed by Nadjemi et al. [8], considering two load distributions, i.e., a residential and a dairy farm all located in Ghardaia, Algeria. The analysis has been done using a cuckoo search algorithm and has been compared with the particle-swarm sizing optimization (PSO) technique, revealing a better accuracy and less computational time compared to the PSO technique. Boussetta et al. [9] have conducted a grid-connected optimal sizing of a hybrid system for 2 load profiles (one with a 379 kWh/day average energy consumption and the other with 113 kWh/day) for an agricultural farm located in Morocco. In the analysis, the authors used the HOMER tool and the components considered were PV, wind, diesel generator, and battery. Madhlopa et al. [10] have studied the optimization of a PV/wind hybrid system under limited water resource conditions using meteorological data of Stellenbosch, South Africa. The plant was designed to generate 100,000 MWh/year of energy for the grid, where the model employed was based on the water constrains of a program developed in MatLab for the economic optimization of the proposed system. Moreover, Silinga et al. [11] conducted a study with regard to the implication of a proposed hybrid CSP peaking system (i.e., a capacity beyond the base load for the grid system) with a capacity of 3.3 MW in South Africa. This was done through re-optimization and comparison between the fixed tariff and 2-tier tariff system, using the spatial-temporal analysis approach. Kazem and Khatib [12] have studied the techno-economic assessment of a grid-integrated photovoltaic system in Sohar, Oman. The authors have applied the MatLab tool and analyzed many parameters, such as annual yield factor, capacity factor, and costs of energy generation. Likewise, the system has been found to be very promising for the site. 
Optimal sizing of a hybrid grid-connected PV/wind/biomass power system has been carried out by Gonzalez et al. [13] for the case of Central Catalonia, Spain. The life cycle cost optimization approach followed in the research has used the optimization toolbox of MatLab, coupled with a sensitivity analysis of some system cost variables and component efficiencies. The optimized configuration was concluded to be of benefit in terms of energy autonomy and environmental quality improvement. Salahi et al. [14] have completed a study regarding the design of a grid-connected hybrid system for the case of Bishesh Village, Iran, based on a peak load of 146 kW. HOMER used different configurations and simulated them using PV, wind, and battery as well as diesel gensets and battery with both the grid connection and comparable off-grid for observing the benefits associated. Dali et al. [15] conducted an experimental study in testing and managing the performance of a hybrid PV/wind system. The authors have used physical emulators, battery storage, local load, dSpace controller, and a grid-tie inverter that is also capable of operating in standalone mode. The system has proved to be able to demonstrate operational capability and effectiveness at both a grid-connection mode and an autonomous mode. Lastly, Nurunnabi and Roy [16] have carried out a study on grid-connected PV/wind with battery storage in Bangladesh for an analyzed peak load of 101.32 kW. The authors have applied the HOMER tool and compared the grid-connected configuration with its off-grid form and the benefits of such proposition were realized in terms of the system's economics. Based on the existing work in the research area with some of them discussed in the preceding paragraph, the purpose of this paper is to indicate the techno-economic and emission impacts of integrating a photovoltaics/wind/biogas hybrid system to the grid for the site of Zaria in northern Nigeria. It is further mentioned that the novelty of this research work has been seen in aspects with regard to the demand side energy efficiency assessment, the economic benefits assessment of the energy transitioning adopted with Microsoft Excel, including the GaBi tool life cycle emissions analysis of the systems' transitioning, as well as the reliability arguments brought forth with respect to the utility grid case and the policy implications as a qualitative measure in the case study country. However, the basis behind choosing the solar PV, wind power, and the biomass-biogas power system components are found firstly in the solar and wind resource potentials of the country being more concentrated in the northern part where the study was conducted, and secondly found in the general availability of wastes that could be turned into useful energy with the aim of ensuring waste minimization for environmental saving. Likewise, the need to integrate renewable energy into the energy system operation and to diversify the energy sources with the ultimate goal of improving energy supply and quality of lives in the case study country is of great concern. Table 1 below presents a clear comparison of the reviewed grid-connected studies with the study of this paper for a clear visibility of the contribution and novelty in ascertaining the gap filled in line of the research domain. 
Table 1 Comparison of the grid-connected renewable energy system studies reviewed in this paper Following the introduction, the paper has been structured in different sections, namely the study site description and energy resource assessment, the site's load demand evaluation, the different components of the power system models with wrap-up economic models applied, the adopted research methods, the detailed and explicit results and the discussion in line with the methods specified, and lastly the conclusion section. Selected site description and energy resource assessment Firstly, Nigerian's electricity situation has really been critical based on the electricity consumption analyzed as 129.04 kWh/Cap./year during 2016 [17, 18]. This is equivalent to a consumption of 0.35 kWh/Cap./day and being tagged with a low electrification rate. This has been the major motivation towards the choice of the country as a joint intervention using the endowed renewable energy resources. However, in a more specific case of the study, the selected site is Zaria (coordinates 11.085° N, 7.72° E), which is a local government and major city in Kaduna State of northern Nigeria. This has been further driven by the fact that despite the whole country suffering with a high energy deficit, some regions tend to be in a more critical situation than others. From experience, this selected site is faced with frequent power cuts and most households rely on gasoline or diesel generator sets to address their power shortages. The negative impacts of the generator sets are numerous, e.g., air and noise pollution resulting in health hazards and environmental degradation due to oil spillage on land and water and excessive greenhouse gas emissions. The site of a further description is situated on a plateau at an elevation of 670 m above sea level [19] and has a total area of 563 km2 and a population of about 975,200 in 2015 [20]. Furthermore, Zaria's climate is tropical wet and dry caused by movement of inter-tropical discontinuity under two air mass influences, i.e., tropically continental and tropically maritime [21]. The wet season (summer) lasts from April to October, whereas the dry season (winter) lasts from November to March. Figure 1 gives the country and study site description on a map. Map of Nigeria showing the study site [22] Renewable resource information about the site is crucial for the system analysis. Solar irradiance with the accompanied temperature and wind speed are the fundamental climate data considered. They are presented in Figs. 2 and 3: Average monthly solar irradiation and air temperature for the site [23] Average monthly wind speed for the site at 50 m [23] Furthermore, after switching to a biomass resource as very substantial to the power system, the breakdown of the different kinds of feedstock production for the country as well as the analyzed average production for the site is provided in Table 2. Table 2 Animal waste production: the country and the analyzed site values in 2014 [24,25,26,27,28] Density of clean biogas at standard temperature and pressure (stp) ranges from 1.1 to 1.5 kg/m3 [29], where 1.2 kg/m3 was applied in the additional evaluations of the Table 2 in view of the modeling data for biomass Load demand evaluations for the site The aim of the hybrid power system design is to address the energy situation of the specified site by supplying grid-connected decentralized power to the population based upon given numbers of households with a load demand specification. 
Within the limits of this design, about 200 households, with an average of six persons per household, were considered for the power system sizing. This is equivalent to supplying energy to 1,200 persons at the site. The breakdown of the load demand is based on the list of appliances used at the household level on a daily basis and their times of use. The appliances specified reflect careful monitoring of the lifestyle at the site. Household energy consumption is also seasonally dependent, as consumption in summer (wet season) differs from that in winter (dry season), and a realistic design must take this into account. Therefore, the load demand is specified for both summer and winter for the sizing of the energy system components. Table 3 gives the details of the load calculation and Fig. 4 summarizes the load profile for the site for the two seasons analyzed. Further random variability has also been considered as a safety factor for a more realistic design, as presented in Table 4, which shows how the summary of the analyzed load specification has been scaled.
Table 3 Daily load demand analysis for the site in the summer and winter case
Baseline load profile for the site during summer and winter
Table 4 Supplementary load demand specifications for scaling
The power system component models and economic parameters
Solar PV system models
Models for solar PV systems are quite numerous. The solar PV power output model adopted here, based on different input parameters, has been obtained from Adaramola et al. [30, 31] as follows:
$$ P_{pv}=Y_{pv}f_{pv}\left(\frac{G_T}{G_{T,STC}}\right)\left[1+\alpha_p\left(T_C-T_{C,STC}\right)\right] $$ (1)
where Ppv = solar PV output power (kW), Ypv = rated capacity of the PV array, i.e., its power output under STC (kW), fpv = PV derating factor (%), GT = solar radiation incident on the PV array (kW/m2), GT,STC = incident solar radiation under standard test conditions (1 kW/m2), αp = temperature coefficient of power (%/°C), TC = PV cell temperature (°C), and TC,STC = PV cell temperature at standard test conditions (25 °C).
Neglecting the effect of temperature, the power model simplifies to:
$$ P_{pv}=Y_{pv}f_{pv}\left(\frac{G_T}{G_{T,STC}}\right) $$ (2)
For the energy generation of the PV system, Kusakana and Vermark [32] have reported a model based on multiple parameters, in line with the preceding PV power determination:
$$ E_{PV}=A\times \eta_m\times P_f\times \eta_{PC}\times I $$ (3)
where EPV = total electrical energy output, A = total area of the photovoltaic generator (m2), ƞm = module efficiency (%), ƞPC = power conditioning efficiency (%), I = hourly irradiance (kWh/m2), and Pf = packing factor.
Wind turbine system models
Many mathematical models also exist for predicting the performance of a wind turbine system. According to Madhlopa et al. [10] and Taher et al. [33], the power output of a typical wind turbine can be estimated, depending on the wind speed regime, as follows:
$$ P_{WT}=\begin{cases} aV^3-bP_{rt} & V_{ci}<V\le V_{rt}\\ P_{rt} & V_{rt}<V<V_{co}\\ 0 & V>V_{co} \end{cases} $$ (4)
where \( a=\frac{P_{rt}}{V_{rt}^3-V_{ci}^3} \) and \( b=\frac{V_{ci}^3}{V_{rt}^3-V_{ci}^3} \), PWT = wind turbine output power, Prt = rated power of the wind turbine, Vrt = rated wind speed, Vci = cut-in wind speed, and Vco = cut-out wind speed.
The power output can also be expressed in terms of the air stream through the rotor:
$$ P_{WT}=\frac{1}{2}\rho A V^{3} C_p $$ (5)
where ρ = density of air = 1.225 kg/m3, A = wind turbine swept area = πr2 (m2), r = rotor radius (m), V = wind velocity (m/s), and Cp = power coefficient (maximum value 0.59).
Finally, a model for predicting the energy output of a wind turbine has been reported by Kusakana and Vermark [32] in terms of parameters similar to those of the power output:
$$ E_{WT}=\frac{1}{2}\times \rho \times V^{3}\times C_{pw}\times \eta_{WT}\times t $$ (6)
where EWT = energy output of the wind turbine, Cpw = wind turbine performance coefficient, ƞWT = combined efficiency of the wind turbine (%), and t = time.
Biomass genset system models
Mathematical models for predicting the performance of a fuel ignition genset are also available. According to Adaramola et al. [30, 31], models for predicting the fuel consumption, total life, and efficiency of the genset system are as follows:
$$ F_c=aP_{rated}+bP_{gen} $$ (7)
where Fc = fuel consumption (L/h), Prated = rated power capacity of the generator (kW), Pgen = generator power output (kW), a = generator's fuel curve intercept coefficient (L/h/kWrated), and b = generator's fuel curve slope (L/h/kWoutput).
$$ R_{gen}=\frac{Q_{running\text{-}time}}{Q_{year}} $$ (8)
where Rgen = generator's operational life (year), Qrunning-time = total running hours for the generator (h), and Qyear = actual annual operating hours (h/year).
$$ \eta_{gen}=\frac{3.6\,P_{gen}}{\dot{m}_{fuel}\,LHV_{fuel}}, \qquad \dot{m}_{fuel}=\rho_{fuel}\left(\frac{F_c}{1000}\right) $$ (9)
where ƞgen = generator's efficiency, ṁfuel = mass flow rate of the fuel (kg/h), ρfuel = density of the fuel (kg/m3), and LHVfuel = lower heating value of the fuel (MJ/kg).
Finally, Kusakana and Vermark [32] put forward a model for determining the total electrical energy generation of a fuel ignition generator as follows:
$$ \mathrm{Electrical\ energy\ output}\ (E_G)=P_{rated}\times \eta_{gen}\times t $$ (10)
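To make these component models concrete, the short Python sketch below evaluates the PV power of Eq. (1), the piecewise wind power curve of Eq. (4), and the genset fuel curve of Eq. (7) for a single hour of operation. All parameter values (module rating, derating factor, turbine cut-in/rated/cut-out speeds, fuel-curve coefficients) are illustrative assumptions only and are not the HOMER design inputs listed in the Appendix tables.

```python
# Illustrative evaluation of the component models in Eqs. (1), (4) and (7).
# All parameter values are assumptions for demonstration only.

def pv_power(y_pv=2000.0, f_pv=0.9, g_t=0.65, g_stc=1.0,
             alpha_p=-0.004, t_cell=45.0, t_stc=25.0):
    """PV array output in kW, Eq. (1): rating derated for irradiance and cell temperature."""
    return y_pv * f_pv * (g_t / g_stc) * (1.0 + alpha_p * (t_cell - t_stc))

def wind_power(v, p_rt=100.0, v_ci=3.0, v_rt=12.0, v_co=25.0):
    """Wind turbine output in kW, Eq. (4): piecewise power curve."""
    a = p_rt / (v_rt**3 - v_ci**3)
    b = v_ci**3 / (v_rt**3 - v_ci**3)
    if v_ci < v <= v_rt:
        return a * v**3 - b * p_rt
    if v_rt < v < v_co:
        return p_rt
    return 0.0  # below cut-in or above cut-out (assumed zero output)

def genset_fuel(p_gen, p_rated=2500.0, a=0.08, b=0.25):
    """Biogas genset fuel consumption per hour, Eq. (7), with assumed fuel-curve coefficients."""
    return a * p_rated + b * p_gen

if __name__ == "__main__":
    print(f"PV output:   {pv_power():8.1f} kW")
    print(f"Wind output: {wind_power(v=7.5):8.1f} kW")
    print(f"Genset fuel: {genset_fuel(p_gen=1800.0):8.1f} fuel units/h")
```

In the actual study, relationships of this kind are evaluated internally by HOMER for every hour of the simulated year; the sketch only shows how the stated formulas translate into a calculation.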
System economic parameters with their models
Discount rates (real and nominal): these are the interest rates considered in the cash flow analysis; the real rate takes inflation into account, whereas the nominal rate neglects it. The following formula relates the two discount rates, as put forward by Nurunnabi and Roy [16]:
$$ i=\frac{i^{\prime}-F}{1+F} $$ (11)
where i = real discount rate, i′ = nominal discount rate, and F = annual inflation rate.
Net present cost (NPC): this is defined as the aggregate of the capital costs and the discounted future costs incurred by the system over the entire life of the project. The model for evaluating this economic parameter is provided in Eq. 12. Closely related to the NPC is the operating cost, whose formula is given in Eq. 13.
$$ NPC=C+\sum_{n=1}^{N}\frac{\mathrm{O\&M}}{\left(1+i\right)^{n}} $$ (12)
where C = capital/investment costs ($), O&M = operation and maintenance costs, i = real discount rate, and N = project lifetime.
$$ \mathrm{Operating\ cost}=\mathrm{CRF}\left(i,N_{\mathrm{Project}}\right)\cdot \mathrm{NPC}-\mathrm{CRF}\left(i,N_{\mathrm{Project}}\right)\cdot C $$ (13)
where CRF = capital recovery factor.
Capital recovery factor (CRF): this parameter is used in calculating the value or cost of an annuity. It is given by the formula reported by Adaramola et al. [31]:
$$ \mathrm{CRF}=\frac{i{\left(1+i\right)}^{N}}{{\left(1+i\right)}^{N}-1} $$ (14)
Levelized cost of energy (LCOE): this is defined as the total cost of generating a unit of energy with the system over its entire life. It can also be seen as the price at which the energy must be sold to break even. It is given by the formula reported by the Fraunhofer Institute for Solar Energy [34]:
$$ \mathrm{LCOE}=\frac{I+\sum_{t=1}^{n}\frac{A_t}{\left(1+i\right)^{t}}}{\sum_{t=1}^{n}\frac{M_{el}}{\left(1+i\right)^{t}}} $$ (15)
where I = capital/investment costs ($), At = annual total (operation and maintenance) costs in year t, and Mel = annual energy/electricity generated (kWh).
The assessment approach adopted was a grid-connected solar PV/wind turbine/biomass gasified power system without storage. The storage system was omitted because the utility grid is incorporated as the backup: generation in excess of demand is forwarded to the grid, while deficits relative to demand are compensated by power purchased from the grid. This configuration was then compared to its off-grid configuration, in which battery storage was incorporated as the backup system, in order to clearly see the gap between the two scenarios for better decision-making. The systems' architecture is described in Fig. 5.
Screenshot of a HOMER block diagram for the systems architecture
From the systems' architecture figure, the transitions in power flow between the demand side, the utility grid, and the battery storage are evident, depending on the kind of power requirement and on the established DC and AC supply buses. This is taken into account by using a bi-directional inverter that works as both an inverter and a rectifier, depending on the power to be dispatched in operation. The inverter specification is given in the Appendix section in Table 14.
In all cases, the HOMER software was used for the sizing, simulation, and optimization, yielding the technically optimal parameters and the corresponding optimum configuration based on the least NPC, in line with the analyzed design input parameters presented in the Appendix section. This includes the utility grid input specification of Table 13, the input specification for the power system components of Table 14, and the additional input specification for the biogas genset of Table 15. Further economic analysis regarding operating costs and LCOE determination for each system case was conducted using Microsoft Excel. The general working of the HOMER software in the system design, from the load specification through component modeling and optimization, is demonstrated in the model given in Fig. 6.
HOMER model description in the design
The operational principle of the energy management for the proposed grid-connected system comprises three stages.
The first stage relates to the solar PV and wind turbine components focused on fulfilling the demand, and the third component representing the biogas genset is optimized in order to be automatically activated based on its minimum load ratio on occasions of insufficiency of solar PV and wind turbine components. The second stage relates to the grid intervention on occasions of total power deficit of the whole system in comparison to the load demand. Hence, the utility grid power is being sourced/purchased to meet the demand based on a defined limit. The third stage also relates to the grid intervention on occasions of total power of the system in excess of the load demand, where the surplus is sent/sold to the utility grid based on the defined limit. The management strategy is clearly described in the model presented in Fig. 7. Optimum energy management and operational principle model Sensitivity analysis was performed to investigate the grid-connected system based on some technical and economic parameters. The technical parameters were solely the climate-based resource data viz. the scaled annual average wind resource, the scaled annual average solar resource, with the accompanied scaled annual average ambient temperature, where an assumption of 5% decrease and 5% increase was provided to the original data. This is in view of possible fluctuations due to the high uncertainty of the climate data. The economic parameter considered was the discount rate being a strong determinant for the time value of money in the cash flow evaluations. The assumption to the baseline discount rate considered was a decrease and an increase of 1% and 2%, respectively, in the sensitivity. Likewise, an energy efficiency assessment was offered for the proposed optimized grid-connected configuration with further simulation and re-optimization using the "HOMER tool" for observing their impact. The focus was on the adjustment of the load demand by switching of appliances specifically for lighting and heating requirements. For the lighting aspect, switching was done from the already specified use of incandescent bulbs in the load calculations to the use of a "light emitting diode (LED)." However, for the heating aspect, the switching was specified from electric cooking and electric water heating regarding the use of an "improved biomass cook stove (IBCS)" for both cooking and water heating. In all the cases, the power demand and cost implications were analyzed and summarized in the Appendix section in Table 16. Furthermore, supplementary economic assessments were successfully performed using Microsoft Excel for analyzing the economic benefits associated with the switch from the comparable standalone system to the proposed grid-connected system, and also from the proposed grid-connected system to its energy efficiency measure. In the same vain, the assessment of supplementary emissions was successfully carried out using the GaBi tool for a further analysis of impact categories, e.g., global warming potential (GWP), acidification potential (AP), and ozone-layer depletion potential (ODP) indicators for the proposed grid-connected system and its energy efficiency measure all from the grid-only power supply, i.e., the power supply of the conventional system mixture, available in the utility grid of the country. This enables us to observe the overall environmental impact of the transition throughout a life cycle. 
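Before turning to the results, the three-stage energy-management rule described at the start of this section can be summarized in a minimal hourly dispatch sketch. It is only an illustration of the stage ordering: the genset rating, minimum load ratio, and grid exchange limit used below are placeholders, and HOMER's internal dispatch logic is considerably more detailed than this.

```python
# Hedged sketch of the three-stage hourly energy balance described above.
# Capacities, limits and the example load/renewable values are placeholders.

def dispatch_hour(load_kw, pv_kw, wind_kw,
                  genset_rating_kw=2500.0, genset_min_load_ratio=0.25,
                  grid_limit_kw=1000.0):
    """Return (genset_kw, grid_purchase_kw, grid_sale_kw) for one hour."""
    renewables = pv_kw + wind_kw
    deficit = load_kw - renewables

    # Stage 1: activate the biogas genset only when PV and wind fall short,
    # respecting its minimum load ratio.
    genset = 0.0
    if deficit > 0:
        genset = min(max(deficit, genset_min_load_ratio * genset_rating_kw),
                     genset_rating_kw)

    balance = renewables + genset - load_kw
    # Stage 2: purchase from the grid when the system is still short of the demand.
    purchase = min(-balance, grid_limit_kw) if balance < 0 else 0.0
    # Stage 3: sell any surplus to the grid, up to the defined exchange limit.
    sale = min(balance, grid_limit_kw) if balance > 0 else 0.0
    return genset, purchase, sale

if __name__ == "__main__":
    for hour, (load, pv, wind) in enumerate([(900, 0, 400), (750, 600, 500), (1200, 300, 200)]):
        g, p, s = dispatch_hour(load, pv, wind)
        print(f"hour {hour}: genset={g:.0f} kW, buy={p:.0f} kW, sell={s:.0f} kW")
```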
Figure 17 of the Appendix section clearly illustrates the model applied in the GaBi tool for the analysis of the life cycle emission impacts. Finally, a wrap-up qualitative assessment focusing on the reliability issue for the utility grid and on overall policy implications, in such energy system practices, for the case study country was included. The project life was taken as 25 years, and the interest rate for the overall economic assessment in the study was assumed to be 6% as a conventional setting. The additional input data referenced in the methodology can be accessed in the Appendix section with citations where necessary. The results of the overall analyses for the hybrid energy system of the considered site in Nigeria were successfully obtained. These include the results for the proposed grid-connected system and the comparable off-grid system, sensitivity results, energy efficiency results, and the supplementary analysis results as follows: Optimization results of the proposed and the comparable system The categorized optimization results for the proposed grid connected system and the comparable off-grid system are presented in Tables 5 and 6. Table 5 Categorized optimized configurations for the comparable off-grid system Table 6 Categorized optimized configurations for the proposed grid-connected system The simulation and optimization results clearly revealed the most feasible optimized configuration with a PV of 1500 kW capacity, a converter of 1000 kW, 150 batteries, 30 wind turbines of the specified rating, and a biogas genset of 3500 kW capacity for the comparable off-grid scenario. This is in contrast to the proposed grid-connected system where its most feasible optimized configuration was a 2000 kW capacity for the PV component with its accompanied converter of a size of 1000 kW, 30 wind turbines with similar specified ratings, and a 2500 kW capacity for the biogas genset component. The in-depth results for the further technical, economic, and emission parameters are presented in Figs. 8, 9, 10, and 11. Technical parameter results for the proposed system and comparable system. Energy supply component ratio (off-grid system: PV 14.60%, wind T, 56.15%, and bio-genset 29.25%/proposed grid-connected system: PV 19.78%, wind T 57.04%, and bio-genset 23.18%) HOMER screenshots monthly average energy production patterns for the comparable off-grid system and the proposed grid-connected system Economic parameter results for both the proposed and the comparable system (Excel-based) The evaluated emissions for the proposed and the comparable system The results clearly show the other technical and economic parameters determined. Looking at the proposed grid-connected system, it is obvious that the total yearly energy supply amounted to 17,353 MWh, which incorporated both utility-grid sourced or purchased energy as well as the energy produced by the system components. The yearly energy consumption is found to be 14,978 MWh as the sum of the load utilization and grid utilization as well as excess generations. This is relatively comparable to the off-grid scenario, where the supplied energy from its system component is found to be slightly more and with more excess generations than that of the proposed grid-connected system. Moreover, the fuel consumption in favor of the proposed grid-connected system has obviously reduced by around 40% due to an obvious reduction in the optimized capacity rating for the biogas genset from 3500 to 2500 kW. 
These technical performance parameters observed must certainly affect the economics of the system resulting in a huge reduction in the NPC as well as the LCOE values by roughly 68% and 67%, respectively. The environmental or emission parameter has further shown more benefits in the grid-connected system, in which the greenhouse gas emission value became negative as compared to the off-grid's slightly positive value. The implication of the negative greenhouse gas emission of the system is the avoided emission as a result of the grid interaction, based on the substituted fossil power from the grid that is a high contributor to greenhouse gas emissions at the operational stage. The specified positive emission value for the comparable off-grid case was due to the presence of the biogas genset with its associated direct emission at the operational level as compared to the life cycle basis where the direct emissions turned to neutral. The emission evaluation formulae for the two systems are displayed in the figure of emissions, i.e., Fig. 11. Sensitivity results for the analysis of the proposed grid-connected system The sensitivity analysis results were successfully confirmed for the different parameters considered. When starting with the economic-based sensitivity and varying the discount rate obviously affected the operating costs, and ultimately the NPC (that is also linked with the operating costs and the LCOE) as shown in Table 7). It is clear that an increase in the discount rate decreases the NPC, as well as ultimately the LCOE and the operating costs. Table 7 Discount rate sensitivity results When turning our attention to the technical and climate-based parameters, beginning with the scaled annual average solar irradiation sensitivity result as presented in Table 8, it is obvious that a change affected many other parameters in the system performance. The scaled annual average irradiance increase only influences the optimized sizing for the system component at 6.06 kWh/m2/day, where the sizing for solar PV and the bio-genset changed. Likewise, the solar PV energy production increased with an increase in the irradiance value all throughout, which triggered a decrease in the bio-genset production due to the flexible nature of the operating hours for the genset when being optimized in dependence on the energy supply of other components. The irradiance changes also affected the economic parameters as well as the grid energy purchase and the sales with a decrease for every increase in the irradiance value. When considering the scaled annual wind speed variations given in Table 9, the optimized sizing for solar PV would be affected. This is true in view of re-adjustments of other components for meeting the demand in a most economic manner. The energy production values for the different components all varied, which affected the economic parameters as well as the grid energy purchases and sales. The last parameter considered in the sensitivity was the ambient temperature that is linked to the irradiation data in the modeling. The results are listed in Table 10. These parameters indicate that the solar PV energy supply was affected in an inverse proportion manner. This is due to the increased temperature impact on the performance of solar PV modules which lowers their efficiencies. The bio-genset energy supply was observed to increase based on the hours of operation changed for ensuring the most economically optimum generation. 
Ultimately, the grid energy purchase and sales were also modified but mostly in a decreasing manner. Table 8 Scaled annual average solar resources sensitivity results Table 9 Scaled annual average wind resources sensitivity results Table 10 Scaled annual average ambient temperature sensitivity results Results of energy efficiency (EE) assessment The analyzed input specifications with regard to the energy efficiency assessment are given in the Appendix section of Table 16 The detailed breakdown of the results is presented in Figs. 12, 13, 14, and 15 for the in-depth technical, economic, and emission aspects. The baseline optimized configurations for the proposed grid-connected system previously worked out included a PV (2000 kW), a converter (1000 kW), 30 wind turbines of the same specified rating, and a biogas genset (2500 kW). The optimized configurations achieved by an energy efficiency analysis revealed a reduction of the genset component to a capacity of 800 kW, and a reduced solar PV component size of up to 400 kW, when a converter of 200 kW was used and the size of the wind turbine was left unchanged. Technical parameter results for the proposed system and its EE measures. Energy supply component ratio (proposed grid-connected system: PV 19.78%, wind T 57.04%, and bio-genset 23.18%/proposed grid-connected system + EE: PV 6.29%, WT 90.74%, and bio-genset 2.97%) HOMER screenshot monthly average energy production patterns for the proposed system and its energy efficiency adoption case Economic parameter results for the proposed system and its EE measures (Excel-based) The evaluated avoided emissions for the proposed system and its EE measures The reduction in the optimized component sizing for the new load demand, arising from an efficient switching of appliances resulted in an energy supply reduction by 37% (i.e., from 16,539 to 10,397 MWh/year). This also ultimately influenced the consumption as clearly demonstrated in Fig. 12. Regarding fuel consumption, a reduction of around 44% was noticed in favor of the energy efficiency case. The economic parameters, specifically the NPC, were drastically reduced by 88%, and the LCOE by 81% despite the associated cost implications of the energy efficiency measures. However, the greenhouse gas emissions were observed to be reduced by around 34% based on the displayed emission formula in Fig. 15 and should be seen as a result of the reduced net energy of the system available in the grid. Results of the analysis of supplementary economic benefits and emissions The supplementary economic benefits of the proposed grid-connected system compared to the base case standalone system were analyzed using Microsoft Excel and showed amazing outcomes in Table 11. It is obvious that the net of the NPC values indicating the saved amount of money in the transitioning to the proposed grid-connected system was close to $35 million. This amount, based on the annuity analysis that incorporated the discount factors, the capital recovery factor, and the project life span led to a simple payback period (PBP) of about 6 years, as well as a discounted payback period (DPBP) of about 7 years. These payback periods (i.e., DPBP and PBP) could be interpreted as the years required for securing back the total costs for the implementation of the proposed grid connected system from the saved amount of money in the systems that were switched both with and without discounting, respectively. 
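The payback indicators reported in Tables 11 and 12 follow from standard cash-flow arithmetic on the saved amount of money. A minimal sketch of that calculation is given below; the capital cost and annual saving values are placeholders chosen only to show the mechanics, not the figures from the Excel analysis, and the annualized return shown is one common convention that may differ from the one used in the study.

```python
# Hedged sketch of simple payback, discounted payback and an annualized
# return measure for a system switch-over. All money figures are placeholders.

def payback_metrics(capital_cost, annual_saving, discount_rate=0.06, life_years=25):
    simple_pbp = capital_cost / annual_saving
    cumulative, discounted_pbp = 0.0, None
    for year in range(1, life_years + 1):
        cumulative += annual_saving / (1 + discount_rate) ** year
        if discounted_pbp is None and cumulative >= capital_cost:
            discounted_pbp = year  # first year in which discounted savings cover the cost
    total_saving = annual_saving * life_years
    annualized_return = (total_saving - capital_cost) / capital_cost / life_years
    return simple_pbp, discounted_pbp, annualized_return

if __name__ == "__main__":
    pbp, dpbp, ret = payback_metrics(capital_cost=12.0e6, annual_saving=2.0e6)
    print(f"Simple payback:     {pbp:.1f} years")
    print(f"Discounted payback: {dpbp} years")
    print(f"Annualized return:  {ret:.1%}")
```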
Ultimately, a return on investment in the switch-over was estimated to be around 16%, which is close to the internal rate of return. Table 11 Economic benefit analysis of the proposed grid-connected system compared to the base case off-grid system (Excel results) Similarly, in ascertaining the benefits of adopting the energy efficiency to the proposed grid-connected system, based on the saved amount of money in such a switch-over from the grid-connected system, and being the base case in this regard, similar analysis parameters were achieved. The saved amount being close to $15 million in the switch-over led to a payback period of 1.78 years, a discounted payback period of 1.99 years, and ultimately to a return on investment as well as an internal rate of return of all approximately 56%. The impact in this scenario is even more rewarding as compared to the impact in the preceded analysis of the grid-connected system to the base case standalone system. This is due to a lesser number of years in the recovery of the total investments and a greater return. Table 12 summarized the entire results of the excel analysis in the comparison. Table 12 Economic benefit analysis of the switch from the grid-connected system to the EE-based system (Excel results) The supplementary emission analysis, which is based on the global warming potential (GWP), the acidification potential (AP), and the ozone-layer depletion potential (ODP) indicators as determinants for slightly broader environmental impacts over the entire life cycle proved to be successful. The results are depicted in Fig. 16. It is evident that a comparison of the proposed system with a grid-only power production system based on a unit kilowatt hour of electricity revealed a gap in the overall life cycle greenhouse gas emission savings with regard to the reduction of the CO2-equivalent from the grid-only power that is comprised of more fossil fuels in the mixture. The same applies to the acidic gas emissions with an acidification potential gap shown on a life cycle basis. It is also obvious that the energy efficiency measure for the proposed grid-connected system has become a reduced GWP, and AP value per unit kilowatt hour of energy production. This is due to the resizing of the system in favor of a higher wind power production share and reduced shares for the biogas and solar PV power, as compared to the optimal sizing of the grid-connected system. However, the AP gases are applied on both a direct and an indirect basis to the proposed hybrid renewable system with its EE measure, unlike the GWP where only the indirect-based emissions apply due to the carbon neutrality of the renewable systems. Regarding the ODP indicator incorporated, it favors grid-only power, although all values being infinitesimal. This is because this potential indicator applies primarily to the solar PV activities in the life cycle, while still affecting the proposed hybrid renewable system with its EE measure. The EE case impact value on the ODP category is observed to be relatively less compared to that of the proposed system prior to the EE incorporation. This is obviously due to the share of the PV energy production being reduced in the optimization process. Supplementary life cycle emission indicator results of the systems Qualitative analysis of the overall findings On extending the power system modeling task, a brief qualitative assessment based on reliability arguments to the case study country is also essential. 
This is based on the utility grid concerns and the policy implications of such energy solutions proposed. Firstly, the Nigerian grid-infrastructure focusing on the transmission network had a theoretical capacity of 7500 MW but can handle a wheeling capacity of 4500 MW over a distance of about 20,000 km [35, 36]. This is said to be insufficiently low and requires significant expansion and integration of the renewable systems in addressing energy deficits, environmental concerns, and so on. However, challenges regarding the utility grid consist of not only the wheeling capacity shortage but also other concerns, namely network transmission losses on an average of 7.4%, being higher than those of emerging countries' benchmarked at 2–6% [35, 36], persistent power cuts arising from the inefficiency of power evacuations, voltage control challenges, poor maintenance, and inadequate mesh networks [4, 37]. Hence, these might serve as limitations to the integration of the hybrid system at the moment despite its outstanding benefits compared to the comparable standalone hybrid system case discussed previously in the quantitative analysis. Thus, adopting the comparable off-grid system is also a good idea for the power supply of the demand site(s), until the utility grid challenges are resolved. Addressing the utility grid challenges for continuous operation stability requires strong measures such as investment on technology transfer, continuous research, adequate financing, and highly skilled manpower, which goes back to the political will of the Nigerian government. On moving to the policy aspect, which is also a strong indicator for the successful transition, it is obvious that the solution targeted both consumption and grid intervention at the domestic level, hence a bidirectional approach, which is closely associated with net metering as a policy instrument. Currently, this policy instrument does not exist in the country; however, a closer instrument to it, being the feed-in tariff exists, which was approved in 2015 and put to force in 2016, covering solar PV, wind turbines, small hydro, and biomass power [38]. Therefore, as a call, the net metering instrument is also needed for such grid integration, particularly for those venturing into the power system business as consumers at the same time for offsetting costs, efficient operation, and ensuring sustainable power supply on the grid-intervention. This should take a favorable package far beyond the conventional electricity price, for the kilowatt hour of net power provided to the grid. It should be mentioned that for the sake of this study, the purchase price was specified in the utility grid inputs as 150% of the conventional power purchase price as a minimum for a better motivation of such ventures. Again, concerning the energy efficiency assessment addressed, it showed a tremendous outcome technically and economically. Hence, such practice is also necessary and needs appropriate incentives from the government for its diffusion and sustainability. The incentives could be made effective through the launch of different programs and sensitizations while clearly specifying the packages necessary for such practices by the energy consumers at the domestic level and beyond. Additionally, full financing for the systems' venture could also be made available as a further motivation where the full payback by the energy producers could then be favored by many installments over a long-term period. 
Detailed assessment of a grid-connected hybrid renewable power system has been proposed and conducted with its comparison to an off-grid hybrid renewable power system for obtaining clear benefits of the successful transition. The assessment was based on techno-economic modeling and optimization, sensitivity analysis, energy efficiency assessment, further supplementary economic and life cycle emissions evaluation, as well as a wrap-up reliability argument. The optimized configuration for the proposed grid-connected system for addressing the considered load profile for the site was found at a total energy production of 16,539 MWh/year, and a total supply of 17,353 MWh/year due to the additional grid purchase of 814 MWh/year. Load consumption was estimated to be 6762 MWh/year and grid injection 8216 MWh/year. The NPC as well as the LCOE for the system were $16.67 million and $0.0788/kWh respectively. These NPC and LCOE values were observed to be roughly 68% and 67% respectively, less than those of the comparable off-grid system. This was caused by the grid impact on the proposed system, as excess energy could be sent to the grid in offsetting the overall system costs despite the need to purchase energy on occasions of deficits. This is different to the off-grid system where the additional battery increases the system costs, without possibilities of grid intervention. Another benefit was observed regarding the massive reduction in greenhouse gas emissions to the point of even eliminating emissions for the grid system at the operational level. The technical and economic parametric sensitivity analysis also revealed an impact on other parameters and the extent of such an impact on the system operation. The energy efficiency assessment with further simulation and re-optimization indicated a tremendous decrease in the optimized sizing, energy production, and economic parameters, hence an opportunity for a credible and commendable transition. The decrease in the economic parameters in the EE implementation for the grid-connected system was found to be as high as by 88% and 81% for the NPC and LCOE, respectively. Nevertheless, the avoided emissions in the grid, based upon the EE assessment, were reduced due to a reduction in the excess energy of the system available in the grid. An evaluation of the further supplementary economic benefits considering the saved cash in systems switching showed impacts on different economic parameters, namely the payback period, the discounted payback period, the rate of return, and the internal rate of return. At the same time, further supplementary assessments based on the life cycle emissions impact also clearly showed a gap of the proposed system and its energy efficiency measures compared to the grid-only power. This is due to the carbon neutrality of the renewable-based system and the carbon positivity of the conventional-based systems in the grid in view of the analyzed GWP case as an example. Nevertheless, indirect emissions were accounted for all the aspects of the life cycle in line with the concerned processes from the GaBi software database. These all are obvious for the relevance of implementing a hybrid renewable energy system with grid integration on decentralized grounds. There is also a strong need for implementing the energy efficiency measures evident for achieving enormous benefits in line with a low-carbon development transition. 
This could be successfully fulfilled using not only a reliable utility grid and commendable policy measures' supports but also strong incentive measures, in particular, for ensuring the energy efficiency practices at the considered domestic level and beyond. Data are available upon request. AP: Acidification potential CRF: Capital recovery factor DPBP: Discounted payback period EE: Ganzleitlichen Bilanz GWP: Global warming potential HOMER: Hybrid Optimization Model for Electric Renewables IRR: Internal rate of return LCOE: Levelized cost of energy NPC: Net present cost ODP: Ozone-layer depletion potential PBP: Payback period US Department of Energy (DOE) (2001) Hybrid renewable energy systems. In: Renewable Energy Workshop, Colorado Negi S, Mathew L (2014) Hybrid renewable energy system: a review. Int J Electron Electrical Eng:8 Rakesh S, Mohanty D, Maharana T, Pareek N (2016) Designing and study standalone hybrid energy system: for technical institutes. Int J Inf Res Rev 03:6 E. Kelechi. Rural electrification infrastructure development in Nigeria using on-grid and off-grid sources. n.pub. n.d. Christoper A, Frank D (2017) Combination of PV and central receiver CSP plants for base load power generation in South Africa. J Solar Energy:10 Available at Science Direct Ileberi G, Adikankwu H, Timi E, Adenekan O (2016) Grid-integration of renewable technology: a techno-economic assessment. Am J Mech Eng 4 Numbi B, Malinga S (2017) Optimal energy costs and economic analyses of a residential grid-interactive solar PV system: case of eThekwini municipality in South Africa. J Appl Energy:18 Available at Science Direct Nadjema O, Nacer T, Hamidat A, Salhi H (2016) Optimal hybrid PV / wind energy system sizing: application of cuckoo search algorithm for Algerian dairy farms. J Renewable Sustainable Energy Rev:14 Available at Science Direct Mohammed B, Rachid E, Elhammoumi K, Khanfara M (2016) Optimal sizing of grid-connected PV-wind system: case study of agricultural farm, Morocco. J Theor Appl Inf Technol 82:11 Amos M, Sparks D, Samantha K, Moorlach M (2015) Optimization of PV-wind hybrid system under limited water resources. J Renewable Sustainable Energy Rev: Available at Science Direct:8 Silinga C, Gauche P, Rudman J, Cebecauer T (2014) Energy procedia: the South African REIPPP two-tier CSP tariff, implications for a proposed hybrid CSP peaking system: Available at Science Direct. In: International Conference on Concentrating Solar Power and Chemical Energy Systems, Solar PACES, 2014 Kazem HA, Khatib T (2013) Techno-economical assessment of grid connected photovoltaic power systems productivity in Sohar, Oman. Sustainable Energy Technol Assessments 3:5 Gonzalez A, Riba J-R, Rius A (2015) Optimal sizing of a hybrid grid-connected photovoltaic–wind–biomass power system. Sustainability 7(09):20 Salahi S, Adabi F, Mozafari SB (2016) Design and simulation of a hybrid micro-grid for Bishesh village. Int J Renewable Energy Res 6(1):13 Dali M, Belhadj J, Roboam X (2010) Hybrid solarewind system with battery storage operating in grid–connected and standalone mode: control and energy management experimental investigation. Energy 35:09 Nurunnabi M, Roy N (2015) Grid-connected hybrid power system design using HOMER. In: International Conferences on Advances in Electrical Engineering, Dhaka, Bangladesh Global Energy Statistical Year Book (2017) Nigerian electricity production. [Online]. Available: https://yearbook.enerdata.net/electricity/world-electricity-production-statistics.html. Accessed Oct 2017. 
Worldometers Population (2017) Countries in the world by population (2017). [Online]. Available: http://www.worldometers.info/world-population/population-by-country/. Accessed Mar 2017. Stephen IO, Egwuonwu G, Osazuwa I (2012) Delineation of all-season-recharged ground water reservoir from two valleys, Zaria, Nigeria. J Environ Hydrol 20:9 Population.city (2015) Zaria population. [Online]. Available: http://population.city/nigeria/zaria/. Accessed Nov 2017. Samuel Y (2013) Assessment of water quality of hand-gug wells in Zaria LGA of Kaduna state, Nigeria. Int J Eng Sci 2(11):04 Global Biodiversity Information Facility (GBIF) (2015) DIVA-GIS. [Online]. Available: https://www.gbif.org/tool/181420/diva-gis. Accessed 2020. National Aeronautics Space Administration (NASA) (2017) Surface meteorology and solar energy. [Online]. Available: https://eosweb.larc.nasa.gov/sse/RETScreen/. Accessed Oct 2017. Food and Agricultural Organization (FAO) (2017) Crops data. [Online]. Available: http://www.fao.org/faostat/en/#data. Simonya K, Fasina O (2013) Biomass resources and bioenergy potentials in Nigeria. Afr J Agric Res 8:15 United Nations Environmental Program (UNEP) (2013) Technologies for converting waste agricultural biomass to energy. United Nations Environmental Program (UNEP), Osaka Paul D, Nicolae F, Matei F (2014) Main factors affecting biogas production - an overview. Rom Biotechnol Lett J 19:14 Moral R, Moreno J, Perez M (2004) Characterisation of the organic matter pool in manures. J Bioresour Technolo: Available at Science Direct:6 D. Ludington (n.d.) Biogas heating value calculations. n.pub. Muyiwa A, Martin AC, Samuel PS (2014) Analysis of hybrid energy systems for application in southern Ghana. J Energy Conversion Manage: Available at Science Direct:12 Muyiwa A, Quansah D, Agelin CM, Paul SS (2017) Multipurpose renewable energy resources based hybrid energy system for remote community in Northern Ghana. J Sustainable Energy Technol Assess:10 Kusakana K, Vermark HJ (2014) Cost and performance evaluation of hydrokinetic-diesel Hybrid System. In: Energy Procedia: The 6th International Conference on Applied Energy – ICAE2014 Available at Science Direct Taher M, Ghodhbane N, Sassi BN (2016) Assessment viability for hybrid energy system (PV/wind/diesel) with storage in the northernmost city in Africa: Bizerte, Tunisia. J Renewable Sustainable Energy Rev: Available at Science Direct:14 Fraunhofer Institute for Solar Energy (FISE) (2012) Levelized cost of electricity, renewable energies. Fraunhofer ISE, Freiburg Nigerian Electricity Regulatory Commission (NERC) (2015) Amended multi year tariff order (Myto) – 2.1 for the period April 1st, 2015 to December, 2018. NERC, Abuja Nigerian Electricity Regulatory Commission (NERC) (n.d) Nigerian energy supply industry (NESI). [Online]. Available: https://www.nercng.org/. [Accessed Dec 2018]. Akpojedge FO, Onogbotsere ME, Mormah EC, Onogbotsere PE (2016) A comprehensive review of Nigeria electric power transmission issues and rural electrification challenges. Intern J Eng Trend Technpol 31:9 International Energy Agency (IEA) (2018) Policies and measures for renewable energy: Nigeria statistical timeline. [Online]. Available: http://www.iea.org. [Accessed 2018]. GreenStarNetwork (2017) Low carbon virtual private clouds. [Online]. [Accessed Oct 2017]. Sadeghi M, Gholizadeh B, Gilanipoor J, Khaliliaqdham N (2012) Economic analysis of using of wind energy, case study of Baladeh city of north Iran. 
Int J Agric Crop Sci:8 Sara S, Rita P, Malmquist A, Pina A (2015) Feasibility study of using a biogas engine as backup in a decentralized hybrid (PV/wind/battery) power generation system: case study of Kenya. J Energy: Available at Science Direct:12 Cat-Electric Power (2011) Gas generator set: AG biogas. n.Pub. Common Wealth of Australia (2008) Emissions estimation manual technique for combustion engines. National Population Inventory. Davis U (2012) Revised air quality and greenhouse gas emissions calculations The Shift Project (n.d) Nigerian electricity production. [Online]. Available: http://www.tsp-dataportal.org/. Accessed April 2017. This study was carried out under the project "Water and Energy Security in Africa (WESA-ITT)," with special recognitions to the support of the supervisors and other members of the project. Full funding has been secured from the German Federal Ministry of Education and Research (BMBF) via its Project Management Agency DLR. Mechanical Engineering Department, Faculty of Technology, University of Tlemcen, B.P. 119/Pôle Chetouane, 13000, Tlemcen, Algeria Ismail Abubakar Jumare Pan African University Institute of Water and Energy Sciences - PAUWES, c/o University of Tlemcen, B.P. 119/Pôle Chetouane, 13000, Tlemcen, Algeria Ismail Abubakar Jumare & Abdellatif Zerga Institute for Technology and Resources Management in the Tropics and Subtropics, TH Köln - University of Applied Sciences, Betzdorfer Strasse 2, 50679, Cologne, Germany Ramchandra Bhandari Abdellatif Zerga Conceptualization, development, and analyses: I.A.J; overall writing: I.A.J; additional analysis suggestions: R.B; review, corrections, and overall supervision: R.B and A.Z. The author(s) read and approved the final manuscript. Correspondence to Ismail Abubakar Jumare. The authors declare no competing interest. The component modeling input data Table 13 Utility grid input specifications Table 14 Input specifications for the power system components Table 15 Additional input specification for biogas generator fuel Table 16 Energy efficiency (EE) assessment input specifications The applied model for the supplementary life cycle emissions assessment in GaBi. Grid-only conventional power mix data ratio sourced from [45]. [Basis, 1 kWh functional unit/equivalent to the 3.6 MJ total reference flows in each case] Jumare, I.A., Bhandari, R. & Zerga, A. Assessment of a decentralized grid-connected photovoltaic (PV) / wind / biogas hybrid power system in northern Nigeria. Energ Sustain Soc 10, 30 (2020). https://doi.org/10.1186/s13705-020-00260-7 Received: 17 September 2019 Hybrid energy system, grid integration energy efficiency assessment
August 2012, 5(4): 753-764. doi: 10.3934/dcdss.2012.5.753 Multiple solutions for a Neumann-type differential inclusion problem involving the $p(\cdot)$-Laplacian Antonia Chinnì 1, and Roberto Livrea 2, Department of Science for Engineering and Architecture (Mathematics Section), Engineering Faculty, University of Messina, Messina, 98166, Italy Department MECMAT, Engineering Faculty, University of Reggio Calabria, Reggio Calabria, 89100, Italy Received April 2011 Revised August 2011 Published November 2011 Using a multiple critical points theorem for locally Lipschitz continuous functionals, we establish the existence of at least three distinct solutions for a Neumann-type differential inclusion problem involving the $p(\cdot)$-Laplacian. Keywords: three-critical-points theorem, variable exponent Sobolev space, $p(x)$-Laplacian, critical points of locally Lipschitz continuous functionals, differential inclusion problem. Mathematics Subject Classification: Primary: 35J20, 35R7. Citation: Antonia Chinnì, Roberto Livrea. Multiple solutions for a Neumann-type differential inclusion problem involving the $p(\cdot)$-Laplacian. Discrete & Continuous Dynamical Systems - S, 2012, 5 (4) : 753-764. doi: 10.3934/dcdss.2012.5.753
O Level | IGCSE | Additional Mathematics | The Complete Study Guide | Video explanations | Downloadable Worksheets

A surd is a root of a rational number that is itself irrational, so it cannot be written exactly as a fraction of integers. The most famous surd is the square root of $2$, which is approximately $1.4142135623730950488016887242096980785696$… We use surds to express such irrational numbers exactly, since we cannot write them fully in decimal form: their decimal expansions neither terminate nor recur. Surds are usually written with square roots $(\sqrt{\square })$, cube roots $(\sqrt[3]{\square })$ and other root symbols.

Laws of Surds

To solve problems involving surds, we must first understand the two rules of surds (for non-negative $a$ and $b$, with $b\ne 0$ in the second rule):

$\sqrt{ab}=\sqrt{a}\times \sqrt{b}$

$\sqrt{\frac{a}{b}}=\frac{\sqrt{a}}{\sqrt{b}}$

Simplify each of the following surds without using a calculator.

(i) $\sqrt{40}$ (ii) $\sqrt{18}$ (iii) $\sqrt{396}$

(i) Since $40=4\times 10$,
$\begin{aligned} \sqrt{40}&=\sqrt{4\times 10} \\ & =\sqrt{4}\times \sqrt{10} \\ & =2\sqrt{10} \end{aligned}$

(ii) Since $18=9\times 2$,
$\begin{aligned} \sqrt{18}&=\sqrt{9\times 2} \\ & =\sqrt{9}\sqrt{2} \\ & =3\sqrt{2} \end{aligned}$

(iii)
$\begin{aligned} \sqrt{396}&=\sqrt{4\times 9\times 11} \\ & =\sqrt{4}\sqrt{9}\sqrt{11} \\ & =2\times 3\sqrt{11} \\ & =6\sqrt{11} \end{aligned}$

Addition and Subtraction of Surds

To perform addition or subtraction on surds, we must make sure that the numbers within the square roots are identical. For instance,

$3\sqrt{2}+5\sqrt{2}=8\sqrt{2}$

$11\sqrt{3}-4\sqrt{3}=7\sqrt{3}$

Simplify each of the following without using a calculator.

(i) $\sqrt{75}+\sqrt{108}$ (ii) $\sqrt{32}+\sqrt{50}$ (iii) $\left( 3+5\sqrt{2} \right)\left( 4-\sqrt{2} \right)$

(i)
$\begin{aligned} \sqrt{75}+\sqrt{108}&=\sqrt{25\times 3}+\sqrt{9\times 4\times 3} \\ & =\sqrt{25}\sqrt{3}+\sqrt{9}\sqrt{4}\sqrt{3} \\ & =5\sqrt{3}+3\cdot 2\sqrt{3} \\ & =5\sqrt{3}+6\sqrt{3} \\ & =11\sqrt{3} \end{aligned}$

(ii)
$\begin{aligned} \sqrt{32}+\sqrt{50}&=\sqrt{16\cdot 2}+\sqrt{25\cdot 2} \\ & =\sqrt{16}\sqrt{2}+\sqrt{25}\sqrt{2} \\ & =4\sqrt{2}+5\sqrt{2} \\ & =9\sqrt{2} \end{aligned}$

(iii)
$\begin{aligned} \left( 3+5\sqrt{2} \right)\left( 4-\sqrt{2} \right)&=12-3\sqrt{2}+20\sqrt{2}+5\sqrt{2}\left( -\sqrt{2} \right) \\ & =12+17\sqrt{2}-5\left( 2 \right) \\ & =12-10+17\sqrt{2} \\ & =2+17\sqrt{2} \end{aligned}$

Conjugate Surds

Conjugate surds, also known as complementary surds, are formed when we change the sign between the two terms. Broadly speaking, conjugate surds refer to the sum and difference of two simple quadratic surds, e.g. $5+\sqrt{3}$ and $5-\sqrt{3}$. What happens when we multiply a surd and its conjugate together? We get a rational number.
$\left( 5+\sqrt{3} \right)\left( 5-\sqrt{3} \right)=5^{2}-3=25-3=22$

Rationalise the denominator of each of the following.

(i) $\frac{2}{\sqrt{5}-3}$ (ii) $\frac{10}{\sqrt{2}-5}$ (iii) $\frac{8}{3\sqrt{7}-1}$

(i)
$\begin{aligned} \frac{2}{\sqrt{5}-3}\cdot \frac{\left( \sqrt{5}+3 \right)}{\left( \sqrt{5}+3 \right)}&=\frac{2\left( \sqrt{5}+3 \right)}{\left( \sqrt{5} \right)^{2}-\left( 3 \right)^{2}} \\ & =\frac{2\sqrt{5}+6}{5-9} \\ & =\frac{2\sqrt{5}+6}{-4} \\ & =-\left[ \frac{2\sqrt{5}}{4}+\frac{6}{4} \right] \\ & =-\left( \frac{1}{2}\sqrt{5}+\frac{3}{2} \right) \end{aligned}$

(ii)
$\begin{aligned} \frac{10}{\sqrt{2}-5}\cdot \frac{\left( \sqrt{2}+5 \right)}{\left( \sqrt{2}+5 \right)}&=\frac{10\sqrt{2}+10\cdot 5}{\left( \sqrt{2} \right)^{2}-\left( 5 \right)^{2}} \\ & =\frac{10\sqrt{2}+50}{2-25} \\ & =\frac{10\sqrt{2}+50}{-23} \\ & =-\frac{10\sqrt{2}+50}{23} \end{aligned}$

(iii)
$\begin{aligned} \frac{8}{3\sqrt{7}-1}\cdot \frac{\left( 3\sqrt{7}+1 \right)}{\left( 3\sqrt{7}+1 \right)}&=\frac{24\sqrt{7}+8}{\left( 3\sqrt{7} \right)^{2}-\left( 1 \right)^{2}} \\ & =\frac{24\sqrt{7}+8}{63-1} \\ & =\frac{24\sqrt{7}+8}{62} \\ & =\frac{12}{31}\sqrt{7}+\frac{4}{31} \end{aligned}$

Equation involving Surds

In this section, we will learn how to find the solution of equations involving surds.

It is given that $a$ and $b$ are positive integers such that $\left( a\sqrt{5}-1 \right)\left( \sqrt{5}+b \right)=20\sqrt{5}+32$. Form a pair of simultaneous equations and solve them to find the value of $a$ and of $b$.

$\begin{aligned} \left( a\sqrt{5}-1 \right)\left( \sqrt{5}+b \right)&=20\sqrt{5}+32 \\ a\sqrt{5}\sqrt{5}+ab\sqrt{5}-\sqrt{5}-b&=20\sqrt{5}+32 \\ \left( 5a-b \right)+\sqrt{5}\left( ab-1 \right)&=32+20\sqrt{5} \end{aligned}$

By comparing the rational and irrational parts, $5a-b=32---(2)$ and
$\begin{aligned} ab-1&=20 \\ ab&=21 \\ a&=\frac{21}{b}---(1) \end{aligned}$

Substitute $(1)$ into $(2)$:
$\begin{aligned} 5\left( \frac{21}{b} \right)-b&=32 \\ \frac{105}{b}-b&=32 \\ 105-b^{2}&=32b \\ b^{2}+32b-105&=0 \\ \left( b-3 \right)\left( b+35 \right)&=0 \end{aligned}$

$\therefore b=3$ or $-35$

When $b=3$, $a=\frac{21}{3}=7$

When $b=-35$, $a=-\frac{21}{35}=-\frac{3}{5}$ (rejected, as $a$ and $b$ are positive integers)

$\therefore a=7,b=3$

Solve $\sqrt{7x-5}-x-1=0$.

$\begin{aligned} \sqrt{7x-5}&=x+1 \\ \left( \sqrt{7x-5} \right)^{2}&=\left( x+1 \right)^{2} \\ 7x-5&=x^{2}+2x+1 \\ 0&=x^{2}-5x+6 \\ 0&=\left( x-3 \right)\left( x-2 \right)\\ x&=2 \text{ or } 3\end{aligned}$

Since squaring can introduce extraneous roots, check both values. When $x=2$,
$\begin{aligned} \text{LHS}&=\sqrt{7\left( 2 \right)-5}-2-1 \\ & =0 \\ & =\text{RHS} \end{aligned}$
and when $x=3$, $\text{LHS}=\sqrt{16}-3-1=0=\text{RHS}$.

$\therefore x=2$ or $3$

Applications of Surds

A practical reason for working with surds is to avoid rounding errors: a value written as a surd is exact, whereas any decimal approximation introduces a small error. In physics, surds are often employed to make sure that important calculations are precise. Surds are also critical in engineering, for instance, in constructing bridges, which need to be structurally sturdy.

A rectangular block has a square base. The length of each side of the base is $\left( \sqrt{5}-\sqrt{3} \right)$ m and the volume of the block is $\left( 4\sqrt{3}-2\sqrt{5} \right)$ m$^3$. Find, without the use of a calculator, the height of the block in the form of $a\sqrt{3}+b\sqrt{5}$.
Let the height be $h$. Then Volume $=\left( \sqrt{5}-\sqrt{3} \right)^{2}h$, so
$\begin{aligned} 4\sqrt{3}-2\sqrt{5}&=\left( \sqrt{5}-\sqrt{3} \right)^{2}h \\ h&=\frac{4\sqrt{3}-2\sqrt{5}}{\left( \sqrt{5}-\sqrt{3} \right)^{2}} \\ & =\frac{4\sqrt{3}-2\sqrt{5}}{\left( \sqrt{5} \right)^{2}-2\sqrt{5}\sqrt{3}+\left( \sqrt{3} \right)^{2}} \\ & =\frac{4\sqrt{3}-2\sqrt{5}}{5-2\sqrt{5\times 3}+3} \\ & =\frac{2\left( 2\sqrt{3}-\sqrt{5} \right)}{8-2\sqrt{15}} \\ & =\frac{2\left( 2\sqrt{3}-\sqrt{5} \right)}{2\left( 4-\sqrt{15} \right)} \\ & =\frac{2\left( 2\sqrt{3}-\sqrt{5} \right)}{2\left( 4-\sqrt{15} \right)}\cdot \frac{\left( 4+\sqrt{15} \right)}{\left( 4+\sqrt{15} \right)} \\ & =\frac{8\sqrt{3}+2\sqrt{3}\sqrt{15}-4\sqrt{5}-\sqrt{5}\sqrt{15}}{\left( 4 \right)^{2}-\left(\sqrt{15}\right)^{2}} \\ & =\frac{8\sqrt{3}+2\sqrt{3}\sqrt{3}\sqrt{5}-4\sqrt{5}-\sqrt{5}\sqrt{3}\sqrt{5}}{16-15} \\ & =8\sqrt{3}+2\left( 3 \right)\sqrt{5}-4\sqrt{5}-5\sqrt{3} \\ & =8\sqrt{3}+6\sqrt{5}-4\sqrt{5}-5\sqrt{3} \\ & =3\sqrt{3}+2\sqrt{5} \end{aligned}$

A cylinder has a radius of $\left( \sqrt{2}-1 \right)$ cm and a volume of $\left( 12+4\sqrt{2} \right)\pi$ cm$^3$. Find, without using a calculator, the exact value of its height, $h$, in the form $\frac{a+b\sqrt{2}}{c}$ cm, where $a$, $b$ and $c$ are integers.

Volume of cylinder $=\pi r^{2}h$ and Volume $=\left( 12+4\sqrt{2} \right)\pi$, so
$\begin{aligned} \pi \left( \sqrt{2}-1 \right)^{2}h&=\left( 12+4\sqrt{2} \right)\pi \\ \left( \sqrt{2}-1 \right)^{2}h&=12+4\sqrt{2} \\ h&=\frac{12+4\sqrt{2}}{\left( \sqrt{2}-1 \right)^{2}} \\ & =\frac{4\left( 3+\sqrt{2} \right)}{\left(\sqrt{2}\right)^{2}-2\left( \sqrt{2} \right)+1} \\ & =\frac{4\left( 3+\sqrt{2} \right)}{\left( 3-2\sqrt{2} \right)} \\ & =\frac{4\left( 3+\sqrt{2} \right)}{\left( 3-2\sqrt{2} \right)}\cdot \frac{\left( 3+2\sqrt{2} \right)}{\left( 3+2\sqrt{2} \right)} \\ & =\frac{4\left( 9+6\sqrt{2}+3\sqrt{2}+2\sqrt{2}\sqrt{2} \right)}{\left( 3 \right)^{2}-\left( 2\sqrt{2} \right)^{2}} \\ & =\frac{4\left( 9+9\sqrt{2}+4 \right)}{9-4\left( 2 \right)} \\ & =\frac{4\left( 13+9\sqrt{2} \right)}{1} \\ & =\frac{52+36\sqrt{2}}{1} \\ & =52+36\sqrt{2}\,\text{cm} \end{aligned}$
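As a quick self-check, the worked answers on this page can be verified with a computer algebra system. The snippet below assumes the sympy library is available; it is only a verification aid, not part of the syllabus working.

```python
# Verify the surd results above with sympy (assumed installed: pip install sympy).
from sympy import sqrt, Rational, expand, solve, Symbol

# Simplification and expansion examples
assert sqrt(40) == 2 * sqrt(10)
assert sqrt(396) == 6 * sqrt(11)
assert sqrt(75) + sqrt(108) == 11 * sqrt(3)
assert expand((3 + 5 * sqrt(2)) * (4 - sqrt(2))) == 2 + 17 * sqrt(2)

# Rationalising the denominator with the conjugate
assert (2 / (sqrt(5) - 3)).equals(-(sqrt(5) / 2 + Rational(3, 2)))
assert (10 / (sqrt(2) - 5)).equals(-(10 * sqrt(2) + 50) / 23)

# Equation involving surds: only roots satisfying the original equation are kept
x = Symbol('x', real=True)
assert sorted(solve(sqrt(7 * x - 5) - x - 1, x)) == [2, 3]

# Geometry applications: block height and cylinder height
block_height = (4 * sqrt(3) - 2 * sqrt(5)) / (sqrt(5) - sqrt(3)) ** 2
assert block_height.equals(3 * sqrt(3) + 2 * sqrt(5))
cylinder_height = (12 + 4 * sqrt(2)) / (sqrt(2) - 1) ** 2
assert cylinder_height.equals(52 + 36 * sqrt(2))

print("All checked results agree with the worked solutions.")
```

Note that sympy pulls perfect-square factors out of square roots automatically, which mirrors the manual use of $\sqrt{ab}=\sqrt{a}\sqrt{b}$, and the `equals` method confirms the rationalised forms without requiring them to be written in exactly the same shape.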
The uniqueness of the inverse elastic wave scattering problem based on the mixed reciprocity relation Jianli Xiang 1, and Guozheng Yan 1,, School of Mathematics and Statistics, Central China Normal University, Wuhan, 430079, China * Corresponding author: Guozheng Yan Fund Project: The first author is supported by the Innovative Funding Project from Central China Normal University (Grant No. 2019CXZZ078). The second author is supported by the National Natural Science Foundation of China (Grant No. 11571132) This paper considers the inverse elastic wave scattering by a bounded penetrable or impenetrable scatterer. We propose a novel technique to show that the elastic obstacle can be uniquely determined by its far-field pattern associated with all incident plane waves at a fixed frequency. In the first part of this paper, we establish the mixed reciprocity relation between the far-field pattern corresponding to special point sources and the scattered field corresponding to plane waves, and the mixed reciprocity relation is the key point to show the uniqueness results. In the second part, besides the mixed reciprocity relation, a priori estimates of solution to the transmission problem with boundary data in $ [L^{\mathrm{q}}(\partial\Omega)]^{3} $ ($ 1<\mathrm{q}<2 $) is deeply investigated by the integral equation method and also we have constructed a well-posed modified static interior transmission problem on a small domain to obtain the uniqueness result. Keywords: elastic wave, uniqueness, mixed reciprocity relation, priori estimate, static interior transmission problem. Mathematics Subject Classification: Primary: 31B10, 35J25, 74B05, 74J25; Secondary: 45Q05. Citation: Jianli Xiang, Guozheng Yan. The uniqueness of the inverse elastic wave scattering problem based on the mixed reciprocity relation. Inverse Problems & Imaging, doi: 10.3934/ipi.2021004
Figure 1. Possible choice of $x^{*}$
International Journal of Health Geographics Cross-border spatial accessibility of health care in the North-East Department of Haiti Dominique Mathon1, Philippe Apparicio ORCID: orcid.org/0000-0001-6466-93421 & Ugo Lachapelle2 International Journal of Health Geographics volume 17, Article number: 36 (2018) Cite this article The geographical accessibility of health services is an important issue especially in developing countries and even more for those sharing a border as for Haiti and the Dominican Republic. During the last 2 decades, numerous studies have explored the potential spatial access to health services within a whole country or metropolitan area. However, the impacts of the border on the access to health resources between two countries have been less explored. The aim of this paper is to measure the impact of the border on the accessibility to health services for Haitian people living close to the Haitian-Dominican border. To do this, the widely employed enhanced two-step floating catchment area (E2SFCA) method is applied. Four scenarios simulate different levels of openness of the border. Statistical analysis are conducted to assess the differences and variation in the E2SFCA results. A linear regression model is also used to predict the accessibility to health care services according to the mentioned scenarios. The results show that the health professional-to-population accessibility ratio is higher for the Haitian side when the border is open than when it is closed, suggesting an important border impact on Haitians' access to health care resources. On the other hand, when the border is closed, the potential accessibility for health services is higher for the Dominicans. The openness of the border has a great impact on the spatial accessibility to health care for the population living next to the border and those living nearby a road network in good conditions. Those findings therefore point to the need for effective and efficient trans-border cooperation between health authorities and health facilities. Future research is necessary to explore the determinants of cross-border health care and offers an insight on the spatial revealed access which could lead to a better understanding of the patients' behavior. The geographical accessibility of health services is an important issue in public health and for improved health outcomes, especially in developing countries [1,2,3,4,5,6]. During the last two decades, numerous studies have explored the potential spatial access to health services within a whole country or metropolitan area [7,8,9,10]. Scholars have also analyzed cross-border mobility for health care in several diverse contexts [11,12,13,14,15,16,17,18,19,20,21,22,23,24]. But fewer studies address the impact of an international border and its openness on the spatial access to health care resources [25, 26]. The concept of borders has been evolving throughout the years from their being seen as barriers to their being considered as contact zones, but regional integration and border openness have been questioned in several contexts [27,28,29,30,31,32]. Studies analyzing cross-border mobility for the use of health care services emphasize the uniqueness of the different border contexts and the importance of the direction of flows [16, 24]. Cross-border mobility for health care access may be explained by a variety of factors. It depends on the various individuals' situations and needs. 
It may be motivated by dissatisfaction with health care provision in the home country or by actual deficiencies there. A lack of coverage (in terms of health care insurance) or a quest for specialized health care may influence individual choice. Glinos et al. [12] indicate that, during the decision-making process, patients balance factors such as proximity, family support and social ties. Affordability, availability and quality of care are also determinants. Analyzing patient mobility between Laos and Thailand, Bochaton [14, 15] demonstrates the importance of well-established mobility practices as well as social networks among border populations in the seeking of cross-border health care. Social networks are also considered by Dione [16] as one of the determinants of patients' cross-border mobility in four African countries sharing a border. Proximity (physical accessibility) is also one of the main determinants of patient mobility in the very different contexts of European [11] and African countries [16]. Access to health care is multidimensional, and most of the studies on patients' cross-border mobility for health care access have used the seminal framework developed by Penchansky and Thomas [33]. These authors consider five dimensions in order to measure "the degree of fit between the clients and the system" [33]. Two of these dimensions are spatial: (1) availability (adequacy between the supply and the demand); and (2) accessibility, or the location of the supply relative to the location of the clients. The other three are aspatial and reflect socioeconomic and cultural factors: (1) accommodation, or the adequate matching of the supply organization with the clients' abilities and perceptions; (2) affordability, or the prices of the services relative to the clients' income or ability to pay; and (3) acceptability (clients' and providers' attitudes toward one another). These dimensions may act as either facilitators or barriers. Regarding spatial accessibility, scholars define this in terms of the possible use of the services (potential accessibility) and their actual use (realized accessibility) [34, 35]. This differentiation between potential access and realized access makes it possible to better identify the barriers to or facilitators of access. The extent of the spatial separation between supply and demand can therefore be analyzed. In this article, we focus on potential spatial accessibility in a borderland context. The border acts either as a geographical constraint or as a facilitator. Our hypothesis is that accessibility varies depending on the level of border openness. In addition, the lack of services (push factor) in Haiti and the more attractive supply (pull factor) in the Dominican Republic may lead to polarized flows in a push/pull dynamic. The aim of this paper is to evaluate the spatial accessibility of health care services for Haitians living along the Haitian-Dominican border, and to measure the impact of this border on their health care access using the well-known E2SFCA method. The Haitian-Dominican border The Haitian-Dominican border inherited from the colonial period has given rise to a "double insularity" [36, 37] that has been settled through a long process of social and spatial differentiation as well as ideological distancing [36,37,38]. Both countries have forged and asserted their particular national identities through their respective histories and struggles to achieve the construction of their own nation state [37, 39]. 
The discontinuities (territorial, cultural, socioeconomic and political) are therefore quite visible at the Haitian-Dominican border [40, 41]. An entire apparatus (gates, military control on the Dominican side, etc.) is in place to mark and create this distance [41,42,43]. At the same time, the relative and recent border opening has given rise to a transitioning process which is redefining the function of the border as moving toward a "space of coexistence and cooperation" while sustaining asymmetrical and conflicting interactions along the border line [40, 42, 44, 45]. Officially (since 1987), the border has been opened during the day and closed at night. There are four official entry points and several informal crossing points, the number of which is not precisely known [38]. These informal crossing points underscore the permeability of the border as well as the complexity of the cross-border mobility [43]. The flow of the population may be constrained by different conflictual situations: a national decision (epidemiological surveillance, control of smuggling, etc.) or a particular local situation (protest about Dominican soldiers' aggressive behaviours, protest over national decisions, protests from Haitian or Dominican traders, etc.) [44]. From 2000 to 2016, the border was closed a number of times for varying numbers of hours or days. But the intensity and importance of the commercial exchanges for both countries, at different levels, may act as a leverage for conflict settlement. Cross-border movements from both sides have existed since the colonial period, but Haitian labour flows started in the early twentieth century with the North American occupation of both countries [38, 43, 46,47,48]. Various mechanisms are in place in the Dominican Republic to regulate such flows (illegality of the Haitian work force, massive deportations, etc.) [38, 48]. According to the recent survey on migration, more than 80% of immigrants in the Dominican Republic are Haitian [49, 50]. The importance of Haitian labour for the construction industry as well as for the agricultural sector is well documented [38, 43, 46, 47, 49,50,51]. Some studies [38, 46] have revealed a "feminization of Haitian migration flows." Others [42] have emphasized the difficulty of distinguishing between irregular migration, smuggling and trafficking. On the other hand, there is some evidence that the percentage of Haitian immigrants using health care facilities is higher than for other immigrants [49]. Furthermore, during the last two decades, the Dominican Republic has been used to channel international aid to Haiti. Montiel et al. [46] emphasize the differential impact of this, including, for example, the reinforcement of the Dominican health care system at the expense of the Haitian one. They consider this to be a factor that could have encouraged a growing number of Haitians to cross the border in search of health care [46]. Their comments are in line with evidence from other studies addressing cross-border health care mobility in different contexts [12, 15, 16]. But beyond the significant and quite systematic health outcome disparities between both countries (Table 1), what are the differences between the two health care systems? 
Table 1 Basic health indicators for Haiti and the Dominican Republic

Main characteristics of the public health care systems in Haiti and the Dominican Republic

The health care system in most Latin American and Caribbean countries is segmented, with a variety of financing structures and affiliation types. It is also fragmented, with a supply offered by many institutions (public and private) and facilities that are not well integrated into the health care network [52]. This fragmentation and segmentation exacerbate inequities in access [52], which is also the case in Haiti [53] and the Dominican Republic [53, 54].

Reforms of the health care system: access to health care and equity

During the last two decades, both countries—like most Latin American [53, 54] and Caribbean countries [55]—have been involved in an ongoing process of reforming their health care sector. These reforms are intended to improve health outcomes and to reduce health inequities. They are based on the following principles: a regulatory role for public health institutions, multisectoral production of health care, universal access, equity and solidarity, and efficiency and efficacy of the health care system [56,57,58,59]. Changes have been made in the structure and organization of the public health care system in both countries in order to improve access to health care, and especially to primary care. Nevertheless, the pace and the implementation of such reforms have fluctuated from one side of the border to the other [55]. In the Dominican Republic, the reform has been the starting point for universal access to health care [54]. Catchment areas have been defined to maximize resource allocation for primary care as well as for equity. Citizens must be assigned to or registered in a Primary Care Unit (Unidad de Atención Primaria). But coverage is still deficient (less than 50% of the population was covered in 2012), with disparities found among socioeconomic groups (the poorest have limited access to health care) and also between rural and urban areas [54]. In Haiti, changes have also been made to improve coherence with administrative boundaries and respect for the equity and universality principles included in the health reform [60]. But the Haitian health system still faces complex organizational and institutional challenges [55]. Moreover, data from the Enquête Mortalité, Morbidité et Utilisation des Services (EMMUS-V) highlight the lack of coverage: less than 5% of respondents have health care insurance [61]. As in the Dominican Republic, wealthier and urban people have better access, which means that equity remains largely unachieved [61, 62].

Organization of the health care system

Both countries have a three-tiered health care system [56, 57, 60, 63, 64], but with some specific differences, as shown in Fig. 1. The pyramidal model is organized according to three levels of complexity: primary, secondary and tertiary. It is designed to break away from the existing hospital-centred structure in order to improve the population's access to primary care. The reference and counter-reference system allows patients to move through the system from the entry point to specialized services when required.

Fig. 1 The three-tiered health care system of Haiti and the Dominican Republic

The primary level consists of outpatient services and community care. The first level therefore offers basic health care (a minimum service package) as well as prevention and promotion activities.
One of the main organizational differences between the Haitian and Dominican health care systems is found at this level. In Haiti, the primary level is subdivided into three parts. It includes different kinds of facilities located in distinct territorial entities: (1) the community health centre, located in the Section communale (the smallest territorial division), which offers ambulatory care and prevention and promotion activities; (2) the health centre, in the Commune, which delivers preventive and curative care, including normal childbirth; and (3) the Community Reference Hospital (HCR), in the Arrondissement, which provides a range of care including sensitive interventions requiring specialists in internal medicine, surgery, pediatrics, obstetrics and gynecology. However, the official documents are somewhat confusing, as two of them [57, 65] consider only two subdivisions while others [60] mention three. Either way, this subdivision appears to be the Haitian health authorities' way of accommodating the prevailing supply of primary care facilities and of carrying out the transition toward the mainstream pyramidal model [60, 65]. It is important to emphasize that, in the Dominican Republic, each citizen is assigned a Primary Care Unit near their home (Unidad de Atención Primaria—UNAP) regardless of their insurance system [59], which is not the case in Haiti. Moreover, these units offer the same range of services as the first two Haitian first-level subdivisions. The facilities of the second level (the General Hospital in the Dominican Republic, whether administered at the municipal or provincial level, and the Departmental Hospital in Haiti) offer basic specialized care in both countries. The services offered by the third level cover all contingencies during hospitalization and attend to the most complex cases.

Binational cooperation in health

The Haitian health master plan (2012–2022) envisages reinforcing coordination with the Dominican Republic in order to reduce epidemiological risks in the borderland regions. It also seeks to develop relevant strategies and partnerships in the management of infectious diseases. There is a binational agreement for the control of tuberculosis aimed at coordinating the actions undertaken in the borderland regions; it mainly targets migrants, the populations of the bateyes (settlements around sugar mills where Haitian migrant workers live in very precarious conditions) and of the industrial areas, as well as those living in the borderland regions. In the case of natural disasters (the floods in 2004 and the earthquake in 2010), Dominican health facilities have supported the Haitian population by offering medical services to those needing them [44, 66, 67]. There are also coordinated vaccination campaigns in the borderland regions. But, as far as the official documents of both countries indicate, there is no cross-border cooperation in health care involving any hospitals or other facilities.

The area studied is in the northern part of the island of Haiti/Quisqueya, and focuses on the region along the Haitian-Dominican border, with one official daytime entry point (Ouanaminthe-Dajabón) and several informal crossing points (Fig. 2). Evidence from the last three decades has shown significant growth in the intensity and diversity of interactions along the border, especially at this entry point, Ouanaminthe-Dajabón [68, 69], the border's second leading entry point.
The cities of Ouanaminthe and Dajabón have played an important role throughout the history of both countries [70]. They have also witnessed violent conflicts, such as the massacre of thousands of Haitians in October 1937. This borderland is nowadays evolving from a barrier into a contact zone and an interdependent zone [41, 69, 70]. Several stakeholders (international organizations, transnational capital, merchants, grassroots organizations, etc.) are engaged in this process [70]. The relocation of a private Dominican industrial free-trade zone to the fertile plain of Maribahoux in Ouanaminthe (a project financed by the International Finance Corporation), however, highlights the advantages the Dominican Republic derives from its different level of development from that of Haiti. It shows how such disparities are helping to widen the gaps and are fostering more asymmetrical interactions [41, 70, 71]. Furthermore, the recent proliferation of binational projects promoted and financed by international organizations is tending to set the framework for a new era of cross-border cooperation [41] in different fields, including health issues. The level of poverty is globally higher in the borderland regions of both countries [72, 73], but there are still important disparities in terms of infrastructure, services, etc. between the North-East Department and the Province of Dajabón. As shown in Fig. 2, the Haitian side of the border is denser, with more and larger cities. In the health sector, this intense mobility has forced the implementation of binational mechanisms for epidemiological surveillance. Gaps in the supply of social services (health, education) tend to lead to asymmetrical interactions and polarized flows in a push/pull dynamic [68, 69, 74]. On the other hand, a few studies [49, 75] have indicated that the proportion of foreigners using health care facilities is higher in the Dominican borderland region than in the rest of the country. Statistics from the Dominican public health secretary show that, in 2015, almost 10% of public hospital patients (consultation and emergency) in the Province of Dajabón were foreigners. The rate is even higher for the primary care centres (35%) (Table 2). Information about foreign patients' nationality is not available. The high percentage of Haitian migrants (87.2% of the immigrant population in the Dominican Republic was born in Haiti according to the Second National Immigrant Survey conducted in 2017) and the proximity to the border suggest that most of those foreign patients are Haitian, but there is no direct evidence for this. The condensed version of the Second National Immigrant Survey (ENI-2017) indicates that 77% of the migrants born in Haiti, as well as 78% of those born in the Dominican Republic to foreign parents, used the public health services [50]. Moreover, hundreds of thousands of Haitian descendants [39] are not considered to be Dominicans because of the 2013 judgment TC/0168/13 of the Dominican Constitutional Court and Law 169-14 [76]. It is thus difficult to estimate the percentage of patients crossing the border to obtain health care and the proportion of Haitians living in the Dominican Republic.

Table 2 Dominican health care facility use for consultation and emergency care by national and foreign patients, 2015

Three types of GIS data are needed to assess the potential accessibility of health care services.
For the supply side: the geographic locations of public health facilities in each country have been collected from the websites of the health secretaries of Haiti and the Dominican Republic. Data on the number of health professionals for each health facility have been provided by the Department of Information of the Dominican Republic's public health ministry (Departamento de Información de Salud). For Haiti, such data were available on the health map on the website of the public health ministry (Ministère de la santé publique et de la population). According to those respective sources, there are a total of 70 public health facilities (35 on each side) and 932 health professionals (322 on the Haitian side and 610 in the Dominican Republic) (Fig. 3). It is worth noting that, on the Haitian side (the North-East Department), there is only one public second-level facility and none in the border city of Ouanaminthe. Meanwhile, the Province of Dajabón has four public second-level facilities (municipal or general hospitals), one of which is located in the border city of Dajabón.

Fig. 3 Health care facilities and health professionals in the studied area

For the demand side: the demographic data have been extracted from the censuses at the equivalent of the census block level (Section d'énumération—SDE) for Haiti (N = 422) and at the neighbourhood level (Barrio) for the Dominican Republic (N = 202). The neighbourhood was the finest spatial unit available. The average population is 868 for the SDE and 317 for the Barrio. The demographic data for Haiti and the Dominican Republic were provided by the national statistical institutes (Institut Haïtien de Statistiques et d'Informatique—IHSI and Oficina Nacional de Estadística—ONE, respectively). Because the last census in Haiti was held in 2003, we had to estimate the population for 2010 (the year of the Dominican census). Our estimates are based on those made by the IHSI for 2009, applying their population growth rate. The use of a centroid assumes that the population is evenly distributed within the spatial unit used (SDE or Barrio), which is not the case, especially in areas with a scattered rural population. To better reflect the reality of the settlements in the rural areas, we use an adjusted centroid of the spatial unit. The adjustments are based on photo interpretation of Google and Bing imagery.

For the travel distance: the road network data were retrieved from OpenStreetMap (OSM) for both countries. Data were also provided by Haiti's National Centre of Geospatial Information (Centre National d'Information Géospatiale—CNIGS). The data were validated using Google and Bing imagery. The road classification of the Haitian and Dominican transport secretaries was used. A maximum travel speed was assigned to each class of road, as indicated in Table 3, based on various sources and on photo interpretation to assess road conditions (a rough illustration of this conversion is sketched below). For pathways, the maximum travel speed is 3 km/h in order to reflect, at least in part, the geographic constraints, since the central part of the area studied lies within a mountain chain. The entry points were georeferenced based on aerial photo interpretation.

Table 3 Road classification and speed

To measure the impact of the opening of the border on spatial access to health care, we consider different scenarios with varying border-crossing time impedance: open, semi-open, or closed.
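Before turning to the scenarios, here is a rough illustration of the preprocessing step just described: attaching maximum speeds to road classes and converting them into per-segment travel times. The class names and speed values below are placeholders standing in for Table 3, and this is a sketch, not the authors' actual GIS workflow.

```python
# Hypothetical road classes and maximum speeds (km/h); Table 3 holds the values
# actually used in the study. Only the 3 km/h pathway speed is quoted in the text.
ROAD_SPEED_KMH = {
    "national": 60,
    "secondary": 40,
    "unpaved": 20,
    "pathway": 3,   # walking speed used for mountain pathways
}

def segment_travel_time_min(length_km: float, road_class: str) -> float:
    """Travel time in minutes for one road segment at its maximum speed."""
    return 60.0 * length_km / ROAD_SPEED_KMH[road_class]

# Example: a 5 km unpaved segment takes 15 minutes at 20 km/h.
print(segment_travel_time_min(5.0, "unpaved"))  # 15.0
```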
These scenarios are hypothetical, since the level of control is not the same all along the border or at the informal crossing points and entry points. Furthermore, different factors (objective and subjective) influence the smoothness of the flows of Haitians at the Dominican border. The estimates of the time spent crossing are based upon: (1) on-site observations in July 2016 and June 2017 at the Ouanaminthe-Dajabón entry point; and (2) informal discussions with key informants in Ouanaminthe and with members of organizations working along the border line. The first scenario is an open border, where there is less control (or almost none) at the Dominican border. The border is open on Friday and Monday, when the so-called "binational" market takes place in Dajabón. Haitians are "free" to cross, and no papers are needed. But, due to the intense flows, delays could be observed. A 15-min cost is thus added to the travel time required to cross the border in order to take light traffic and migration controls into account. The second and third scenarios consider the border half closed. In this case, there is more control at the Dominican border. It is a twofold situation: (a) normal border control for migration and light traffic (scenario 2); and (b) stricter control and heavy traffic (scenario 3). The cost varies from 30 min for scenario 2 to 60 min for scenario 3. In the fourth and last scenario, the border is closed. No crossing is permitted. This is, for example, the case during the night or in other particular contexts such as conflicts, elections, etc. For all four scenarios, we consider only a one-directional flow: from Haiti to the Dominican Republic. This choice is based on the hypothesis that, due to the disparities between the two countries, a push/pull dynamic polarizes cross-border flows toward the Dominican Republic.

The Enhanced Two-Step Floating Catchment Area (E2SFCA) method

Potential spatial accessibility, as described earlier, reflects the spatial separation between the supply (in this case, the number of health professionals) and the demand, defined by the overall population. Numerous studies have demonstrated the importance of distance (metres or travel time) for access to health care in developing countries [1, 2, 4]. Geographic constraints as well as road conditions can result in low access to health care and affect the use of health care facilities, with important repercussions for health outcomes and public health. Several methods are used to measure spatial accessibility [77, 78]. The approach based on available supply assumes that all users within the same catchment area have equal access regardless of geographic constraints [9, 77]. The gravity model and its derived two-step floating catchment area (2SFCA) method consider spatial interactions and the mobility of the population [35]. The well-known 2SFCA method first computes, for each supply point, the ratio between the supply (number of physicians or health professionals) and the demand (population) within its catchment area, and then, for each demand point, sums these ratios [79, 80]. To overcome the limitations of the 2SFCA, an enhanced method was developed by Luo and Qi [10], which applies weights to differentiate travel-time zones, thereby accounting for distance decay. This method is used here to evaluate the cross-border potential spatial accessibility of health care services.
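Conceptually, the four scenarios amount to a fixed time penalty added to any Haiti-to-Dominican-Republic trip whose route crosses the border. The sketch below expresses that idea; it is an illustration of the scenario logic, not code from the study, and the closed border is simply modelled as an infinite cost.

```python
import math

# Border-crossing penalties in minutes, one per scenario described above.
BORDER_PENALTY_MIN = {
    "scenario_1_open": 15,
    "scenario_2_semi_open": 30,
    "scenario_3_semi_open_strict": 60,
    "scenario_4_closed": math.inf,  # no crossing permitted
}

def od_travel_time(base_time_min: float, crosses_border: bool, scenario: str) -> float:
    """Total origin-destination travel time under a given scenario (Haiti -> D.R. only)."""
    penalty = BORDER_PENALTY_MIN[scenario] if crosses_border else 0.0
    return base_time_min + penalty

# Example: a 25-min trip that crosses the border takes 85 min under scenario 3.
print(od_travel_time(25.0, True, "scenario_3_semi_open_strict"))  # 85.0
```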
Since the area studied includes rural areas, the catchment area (within a 60-min driving, motorbiking and walking time) has been divided into four travel-time zones, as proposed by some authors [79, 81]: 0–15, 15–30, 30–45 and 45–90 min. The 45–90 min zone is extended to account for the 60-min cost of a semi-open border with stricter control, as indicated above. The maximum travel speed for each class of road accounts for the assumed mixed transportation mode (walking combined with motorbiking, the most usual transportation mode in the studied area). The method is implemented in two steps, using the equations below. The first step assigns an initial ratio to each health service within the catchment area. In the second step, for each demand location, we search all supply locations within the catchment area and then sum up the initial ratios $R_j$ at these locations. The resulting $A_k$ represents the accessibility of the population at location $k$, $R_j$ the supply-to-population ratio at the health service (supply) location $j$ that falls within the catchment area, and $d_{kj}$ the distance (min) between $k$ and $j$. The same distance weights used in step 1, derived from a Gaussian function, are applied to the different travel-time zones to account for distance decay. A larger value implies better accessibility.

$$R_{j} = \frac{S_{j}}{\sum\limits_{k \in \left\{ d_{kj} \in D_{r} \right\}} P_{k} W_{kj}} = \frac{S_{j}}{\sum\limits_{k \in \left\{ d_{kj} \in d_{1} \right\}} P_{k} W_{1} + \sum\limits_{k \in \left\{ d_{kj} \in d_{2} \right\}} P_{k} W_{2} + \sum\limits_{k \in \left\{ d_{kj} \in d_{3} \right\}} P_{k} W_{3} + \sum\limits_{k \in \left\{ d_{kj} \in d_{4} \right\}} P_{k} W_{4}}$$

$$A_{k} = \sum\limits_{j \in \left\{ d_{kj} \in D_{r} \right\}} R_{j} W_{kj} = \sum\limits_{j \in \left\{ d_{kj} \in d_{1} \right\}} R_{j} W_{1} + \sum\limits_{j \in \left\{ d_{kj} \in d_{2} \right\}} R_{j} W_{2} + \sum\limits_{j \in \left\{ d_{kj} \in d_{3} \right\}} R_{j} W_{3} + \sum\limits_{j \in \left\{ d_{kj} \in d_{4} \right\}} R_{j} W_{4}$$

where $S_j$ represents the weight given to service $j$, namely its size (i.e., number of health professionals) (the "supply side"), $d_{kj}$ is the distance (travel time) between spatial unit centroid $k$ and health service $j$, $d_0$ is the threshold travel time (min), $P_k$ represents the demand at location $k$ that falls within catchment area $j$, and $W_1, W_2, W_3, W_4$ = 1.00, 0.80, 0.55, 0.15 with a slow step-decay function or 1.00, 0.60, 0.25, 0.05 with a fast step-decay function.

The calculations are done using two kinds of software (ArcGIS and SAS). The cost-distance matrix obtained using the Network Analyst extension in ArcGIS has been exported to SAS to compute the E2SFCA. The final results are mapped in ArcGIS. Statistical analysis was conducted to explore the differences and variation in the E2SFCA calculations. The Wilcoxon test was computed to assess the differences and variation observed in the E2SFCA results for each scenario and country. Finally, linear regression models were used to predict the accessibility of health services (E2SFCA) according to the four scenarios and their variation. All statistical analyses were carried out using SAS software. As mentioned before, four simulations are considered to measure the impact of the opening of the border on the level of accessibility of public health services for the borderland population of the North-East Department and the Province of Dajabón.
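To make the two steps concrete, here is a minimal sketch of the E2SFCA computation for one scenario, assuming a precomputed demand-by-supply travel-time matrix (in minutes, with the border penalty already included) and the slow step-decay weights quoted above. It is an illustration of the equations, not the ArcGIS/SAS implementation used in the study.

```python
import numpy as np

# Travel-time zone breaks (min) and the slow step-decay weights quoted above.
BREAKS = [15, 30, 45, 90]
WEIGHTS = [1.00, 0.80, 0.55, 0.15]

def zone_weight(t_min: float) -> float:
    """Distance-decay weight W for a travel time; zero outside the catchment."""
    for b, w in zip(BREAKS, WEIGHTS):
        if t_min <= b:
            return w
    return 0.0

def e2sfca(travel_time, supply, population):
    """travel_time: (n_demand, n_supply) minutes; supply: professionals per facility;
    population: residents per demand centroid. Returns accessibility A_k per centroid."""
    travel_time = np.asarray(travel_time, dtype=float)
    supply = np.asarray(supply, dtype=float)
    population = np.asarray(population, dtype=float)
    W = np.vectorize(zone_weight)(travel_time)                 # W_kj for every pair
    weighted_pop = (population[:, None] * W).sum(axis=0)       # denominator of R_j
    R = np.divide(supply, weighted_pop,
                  out=np.zeros_like(supply), where=weighted_pop > 0)  # step 1: R_j
    return (W * R[None, :]).sum(axis=1)                        # step 2: A_k
```

For instance, a demand centroid 10 min from a facility contributes to (and receives from) that facility with weight 1.00, while one 50 min away contributes with weight 0.15.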
To facilitate the comparison between the four scenarios, a quantile classification with five classes has been used and mapped (Fig. 4). The results for each scenario follow.

E2SFCA results

Scenario 1: Open border

The first scenario is an open border. A penalty of 15 min is added to the travel time of Haitians crossing the border. The results show contrasting levels of accessibility in the North-East Department between areas next to the border and more remote locations (Fig. 4a). Two features stand out. First, a large area located mostly in the commune of Ouanaminthe has the highest accessibility ratio. A smooth gradation is observed to the west (along the national road connecting this region with the North Department and its capital, Cap-Haïtien, the second most important city in Haiti) and to the northwest toward Fort-Liberté (the North-East Department's capital). Second, a sharp drop in the level of accessibility is seen between those two regions (respectively [P60 to P80[ and [P80 to Max], the last two quintiles) and the other, remote locations (corresponding to the first quintile, [Min to P20[). The areas with the highest level of accessibility are those where hospitals with a larger number of health professionals are located. They are also better connected to a road network in good condition, with higher maximum speeds. The pattern in the Dominican Republic is quite different: the municipalities at the edge of the Province have the highest level of accessibility, while those next to the border have moderate to low access. There are scattered areas with a very low level of access to health care. Dajabón, the main city of the Province, has a moderate level of accessibility with an open border because of its proximity to Ouanaminthe, a city with a population of 60,000. Therefore, an open border induces a potential overload of the Dominican health care services due to an increased demand from Haitians, and consequently lowers the health professional-to-population accessibility ratio for the Dominicans. But the overall situation in terms of accessibility in the Dominican Republic remains better than in Haiti, even with an open border.

Scenario 2 and scenario 3: Half-closed border

Scenario 2 is a half-closed border with a 30-min cost to cross the border, and scenario 3 a half-closed border with a 60-min cost. The map indicates some changes in the pattern compared with the open border (Fig. 4b). First, there is a small drop in the extent of the area with the highest accessibility on the Haitian border side. Second, on the Dominican side, the level of accessibility is globally higher than that observed in the first scenario because of a decrease in the potential demand from Haitians at the Dominican sites. Scenario 3 is a half-closed border with a 60-min cost added for crossing the border, indicating more control at the Dominican border (Fig. 4c). The results show a significant reduction in the extent of the area with a higher level of accessibility on the Haitian side of the border. On the Dominican side of the border, there is a noticeable improvement in the overall level of accessibility in the Province of Dajabón. The added 60-min cost causes a significant decrease in the Haitians' potential demand at the Dominican sites, which is then limited to the 15-min travel zone. Therefore, the Dominicans' accessibility level increases beyond the 30–45 min travel zones. The results also emphasize the impact of the low road coverage, especially on the Haitian population's access to health resources.
Scenario 4: Closed border

With the border closed, the results show the potential spatial accessibility of health care facilities in each country (Fig. 4d). Globally, the level of accessibility is higher in the Dominican Republic than in Haiti. In fact, nearly all of the Haitian spatial units belong to the first two quintiles (light gray), while those of the Dominican Republic belong to the last two quintiles (dark gray), drawing attention to the existing disparities between the two countries in terms of potential accessibility to health care. This scenario also confirms the striking gaps within the North-East Department, especially between the remote locations and the urban areas.

Variation between scenario 4 and scenario 1

Figure 5 shows the variation in the level of spatial accessibility between scenario 4 (closed border) and scenario 1 (open border). It highlights the areas most affected by the border's level of openness. As shown in Fig. 5, the solid blue areas are those that benefit from an open border. The red ones are those gaining better access when the border is closed. The border has almost no impact on an extended territory (pale yellow) of the North-East Department, where the variation differences are negative but close to zero.

Fig. 5 Variations in E2SFCA results for scenario 4 versus scenario 1

In both countries, the areas next to the border are the most sensitive to the impact of the border on their level of spatial accessibility. Those areas are the ones where an open border induces an increased demand from Haitians at the Dominican health services located near the border (within the 15–45 min travel zone). The road network also plays an important part in the border effect, as the pattern is aligned with the main road network. For example, borderland areas (the southern part of the North-East Department in Haiti) covered with pathways and subject to geographic constraints do not benefit to the same extent as those with good road network coverage. A similar sensitivity pattern is observed in the Province of Dajabón.

Results of nonparametric test and regression models

To explore the differences (location and scale) and variation in the E2SFCA results for each country, we conduct a nonparametric test (the Wilcoxon test). Figure 6 shows that, for scenarios 2 (mean rank = 283 for Haiti vs 375 for the Dominican Republic, z = 6.01, p < 0.0001) to 4 (mean rank = 220 for Haiti vs 505 for the Dominican Republic, z = 1.52, p < 0.0001), as well as for the variation (scenario 4 − scenario 1) (mean rank = 212 for Haiti vs 522 for the Dominican Republic, z = 20.26, p < 0.0001), the results are significant (p < 0.0001), whereas that is not the case for scenario 1 (mean rank = 320 for Haiti vs 296 for the Dominican Republic, z = −0.17, p = 0.117). It is relevant to note: (a) the dispersion of the scores for Haiti compared to those for the Dominican Republic; and (b) the gap in mean rank between Haiti and the Dominican Republic for scenario 3 (half-closed border) and scenario 4 (closed border). The variability and dispersion in the range for Haiti emphasize the disparities within the North-East Department shown in Fig. 4. The results for the variation between an open border and a closed border confirm the impact of the border on the level of spatial accessibility of health care for the Haitian population.
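A comparison of this kind can be reproduced with standard tools. The sketch below applies a Wilcoxon rank-sum test to the E2SFCA scores of the Haitian and Dominican spatial units for one scenario; the arrays are random placeholders standing in for the accessibility scores, and this is not the SAS code used by the authors.

```python
import numpy as np
from scipy.stats import ranksums

# Placeholder accessibility scores for one scenario (422 Haitian and 202 Dominican units).
rng = np.random.default_rng(1)
scores_ht = rng.gamma(shape=1.5, scale=0.5, size=422)
scores_dr = rng.gamma(shape=4.0, scale=0.5, size=202)

z, p = ranksums(scores_ht, scores_dr)   # z statistic and two-sided p-value
print(f"z = {z:.2f}, p = {p:.4g}")
```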
Fig. 6 Boxplots for the four scenarios

Finally, several linear regression models are conducted to predict the accessibility of health services (the E2SFCA results) according to the four scenarios and the variation between the two extremes. Two independent variables are introduced in these models: Haiti (the Dominican Republic is defined as the reference category) and rural area (versus urban area). The results of these models are shown in Table 4.

Table 4 Linear regression for E2SFCA (n = 624)

First, note that R² increases from 0.15 to 0.89 for scenarios 1–4. Next, the degree of border openness has a significant impact on accessibility on both sides of the border, to the detriment of Haiti (with increasingly strong negative regression coefficients). Not surprisingly, the coefficients for rural areas confirm that these areas have poorer accessibility, regardless of the scenario. In addition, the positive and significant coefficient for the variation between scenarios 4 and 1 shows that the closure of the border strongly affects accessibility in urban centres that are close to the border.

The E2SFCA results and statistical analyses clearly highlight the impact of the border on the potential spatial accessibility of public health services for the Haitian and Dominican border populations, with a particular pattern caused by the one-directional movement assumed in the model. In fact, the simulations carried out show that Haitian populations in areas close to the border line—particularly near an entry point (formal or informal)—and served by a road network in good condition have higher levels of accessibility when the border is open (scenario 1) or semi-closed with a 30-min penalty (scenario 2). At the same time, an increased demand from Haitians in those specific areas for Dominican health services lowers the health professional-to-population accessibility ratio in the Dominican Republic, causing striking variations according to the openness of the border. It is therefore interesting to note that increasing the cost from 30 to 60 min makes the level of accessibility vary widely across the border. Thus, the opening of the border only impacts spatial accessibility for the Haitian population in the vicinity (travel-time zones 0–15 min and 15–30 min). These results are not surprising, as these areas have a road network in good condition, confirming the importance of a good road network [4, 82,83,84] and of the type of distance [78] in potential spatial accessibility. As a result, rural areas have the lowest level of accessibility on the one hand and, on the other hand, benefit very little from the opening of the border despite their proximity to it. A weak road network (an absence of roads, or roads in poor condition) and topographical constraints, combined with a limited supply of services (type of service and number of health professionals), indeed characterize Haitian rural areas. In the Dominican Republic, an open border not only decreases the level of accessibility, as mentioned before, but also generates more disparities within the Province of Dajabón, especially for the population at its edges. Introducing a 30- or 60-min cost for a semi-closed border smooths out the gaps within Dajabón, since the Haitians' demand at the Dominican health services decreases. Scenario 4 highlights the differences in the potential spatial accessibility of health services between the two countries. These differences clearly underline the health and spatial discontinuities due to the border.
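The structure of the Table 4 models can be sketched in the same spirit: an ordinary least-squares regression of the E2SFCA score on a Haiti dummy (the Dominican Republic as reference) and a rural dummy (urban as reference). The data below are synthetic stand-ins for the 624 spatial units; only the model specification reflects the paper, not the coefficients.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_ht, n_dr = 422, 202                                 # SDE and Barrio counts from the data section
df = pd.DataFrame({
    "haiti": np.r_[np.ones(n_ht), np.zeros(n_dr)],    # D.R. is the reference category
    "rural": rng.integers(0, 2, size=n_ht + n_dr),
})
# Synthetic response standing in for the E2SFCA score of one scenario.
df["e2sfca"] = 1.0 - 0.6 * df["haiti"] - 0.2 * df["rural"] + rng.normal(0, 0.1, n_ht + n_dr)

model = smf.ols("e2sfca ~ haiti + rural", data=df).fit()
print(model.params)      # negative coefficients for haiti and rural, as in Table 4
print(model.rsquared)
```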
The disparities in the spatial accessibility of public health services are very low (or almost non-existent) within the Dominican territory, in striking contrast with Haiti, where they are high. Those gaps can lead to a one-directional flow like the one assumed by the model. Furthermore, several empirical studies [16, 85, 86] in different border contexts indicate a pattern of polarized flows caused by unsatisfied demand on one side and a more attractive supply on the other. Nevertheless, this push/pull dynamic could have a considerable impact on the health services of the recipient country, depending on its public health care capacity, the volume of cross-border patients and the borderland context, including the level of cooperation or integration of the countries involved. It is worth noting that the challenges for both countries regarding those issues are even higher when the results of the potential spatial accessibility model are considered. However, an optimization of the E2SFCA that weighted the population according to the real use of health services on both sides of the border would have given a closer insight into the reality of potential spatial accessibility. It would also have been appropriate to assess the impact of the border on potential spatial accessibility by integrating socioeconomic and demographic factors, in order to analyze the correlation between population characteristics and cross-border spatial accessibility. The results also call for better cooperation and integration of the two countries' health care systems. In this regard, the stakes for Haiti and the Dominican Republic are high, not only because of the instability of relations between the two countries, but also because of the thorny issue of migration. As Alexandre [48] points out, cross-border movements between Haiti and the Dominican Republic, including movements linked to health, cannot be thought of without considering a reform of the migration legislation in both countries. The results emphasize the impact of a good road network on the spatial accessibility of health care, as discussed in many studies. They also show the impact of the openness of an international border on the potential accessibility of health care in borderland regions, highlighting the importance of distance. Proximity is thus seen as one of the determinants of cross-border mobility and of health care-seeking behavior. But other factors, such as the attractiveness (quality, cost) of health care services, must be considered to analyze individuals' behaviors. In our research, we also assume that the whole Haitian population of the North-East Department would potentially choose to cross the border, but this is not actually the case. An optimization of the model would make it possible to better evaluate the impact of the border and to obtain more robust results, with a better appreciation of the reality of the situation. A gender-oriented analysis could also have been of interest considering, inter alia, the high maternal mortality rate in Haiti and the high number of unassisted deliveries, particularly in rural Haitian areas. The study also highlights the need for more research so as to better understand the determinants of cross-border health care use. Moreover, the distance thresholds are arbitrary and do not necessarily reflect specific patients' behavior, suggesting the need for qualitative inquiry into patients' therapeutic itineraries.
In-depth interviews and surveys could therefore offer an insight into revealed spatial access and lead to a better understanding of patients' behavior and how this is related to their practices around the border. Furthermore, cross-border movements in health are part of bigger issues. They should be addressed not only in shrinking the gaps in health access resources but also in creating the needed legal and institutional environment for them to develop smoothly. Perry B, Gesler W. Physical access to primary health care in Andean Bolivia. Soc Sci Med. 2000;50(9):1177–88. Rushton G. Use of location-allocation models for improving the geographical accessibility of rural services in developing countries. Int Reg Sci Rev. 1984;9(3):217–40. Tanser F, Gijsbertsen B, Herbst K. Modelling and understanding primary health care accessibility and utilization in rural South Africa: an exploration using a geographical information system. Soc Sci Med. 2006;63(3):691–705. Rosero-Bixby L. Spatial access to health care in Costa Rica and its equity: a GIS-based study. Soc Sci Med. 2004;58(7):1271–84. Barnes-Josiah D, Myntti C, Augustin A. The "three delays" as a framework for examining maternal mortality in Haiti. Soc Sci Med. 1998;46(8):981–93. Schoeps A, Gabrysch S, Niamba L, Sié A, Becher H. The effect of distance to health-care facilities on childhood mortality in rural Burkina Faso. Am J Epidemiol. 2011;173(5):492–8. Luo W, Wang F. Measures of spatial accessibility to health care in a GIS environment: synthesis and a case study in the Chicago region. Environ Plan. 2003;30(6):865–84. Pan J, Liu H, Wang X, Xie H, Delamater PL. Assessing the spatial accessibility of hospital care in Sichuan Province, China. Geospatial Health. 2015;10(2):261–70. Luo W. Using a GIS-based floating catchment method to assess areas with shortage of physicians. Health Place. 2004;10(1):1–11. Luo W, Qi Y. An enhanced two-step floating catchment area (E2SFCA) method for measuring spatial accessibility to primary care physicians. Health Place. 2009;15(4):1100–7. Glinos I, Baeten R: A literature review of cross-border patient mobility in the European Union. In: Observatoire social européen, Europe for patients; 2006. p. 115. Glinos I, Baeten R, Helble M, Maarse H. A typology of cross-border patient mobility. Health Place. 2010;16(6):1145–55. Glinos IA, Doering N, Maarse H. Travelling home for treatment and EU patients' rights to care abroad: results of a survey among German students at Maastricht University. Health Policy. 2012;105(1):38–45. Bochaton A. Cross-border mobility and social networks: Laotians seeking medical treatment along the Thai border. Soc Sci Med. 2015;124:364–73. Bochaton A: La construction de l'espace transfrontalier lao-thaïlandais. Une analyse à travers le recours aux soins. Espace populations sociétés Space populations societies. 2011;(2011/2):337–351. Dione I: Polarisation des structures de soins de la Haute Casamance: entre construction nationale des systèmes de santé et recours aux soins transfrontalier. Université d'Angers; 2013. Brown HS. Do Mexican immigrants substitute health care in Mexico for health insurance in the United States? The role of distance. Soc Sci Med. 2008;67(12):2036–42. Grossman D, Garcia SG, Kingston J, Schweikert S. Mexican Women Seeking Safe Abortion Services in San Diego, California. Health Care Women Int. 2012;33(11):1060–9. Guendelman S. Health care users residing on the Mexican border what factors determine choice of the U.S. or Mexican Health System? Medical Care. 
1991;29(5):419–29. Guendelman S, Jasis M. Giving birth across the border: the San Diego-Tijuana connection. Soc Sci Med. 1992;34(4):419–25. Horton S, Cole S. Medical returns: seeking health care in Mexico. Soc Sci Med. 2011;72(11):1846–52. Laugesen MJ, Vargas-Bustamante A. A patient mobility framework that travels: European and United States-Mexican comparisons. Health Policy. 2010;97(2–3):225–31. Su D, Richardson C, Wen M, Pagán JA. Cross-Border Utilization of Health Care: evidence from a Population-Based Study in South Texas. Health Serv Res. 2011;46(3):859–76. Peiter PC: Condiciones de vida, situación de salud y disponibilidad de servicios de salud en la frontera de Brasil: un enfoque geográfico. Cád Saúde Pública; 2007. De Ruffray S, Hamez G. L'accessibilité transfrontalière aux maternités: Enjeux territoriaux d'une coopération sanitaire dans la Grande Région. In: Moullé F, Duhamel S, editors. Frontières et santé: Genèses et maillages des réseaux transfrontaliers. Paris: L'Harmattan; 2010. Perez S, Balli A: L'accessibilité aux soins dans l'espace frontalier des Alpes du Sud. In: Frontières et santé: genèses et maillages des réseaux transfrontaliers. Paris: L'Harmattan; 2010. Arbaret-Schulz C, Beyer A, Piermay J-L, Reitel B, Selimanovski C, Sohn C, Zander P: La frontière, un objet spatial en mutation. EspacesTemps net. 2004; 29(04). Piermay J-L, Reitel B, Zander P: Introduction. In: Reitel B, editor. Villes et frontières, vol. Collections: Collection Villes (Paris, France). Paris: Paris: Anthropos: Economica; 2002. p. 2–9. Herzog LA. The transfrontier organization of space along the US-Mexico border. Geoforum. 1991;22(3):255–69. Herzog LA, Sohn C. The cross-border metropolis in a global age: a conceptual model and empirical evidence from the US–Mexico and European border regions. Glob Soc. 2014;28(4):441–61. Anderson J, O'Dowd L. Borders, border regions and territoriality: contradictory meanings, changing significance. Reg Stud. 1999;33(7):593–604. Paasi A. Borders and border-crossings. In: Johnson NC, Schein RH, Winders J, editors. Cultural geography. Chichester: Wiley; 2013. p. 478–93. Penchansky R, Thomas JW. The concept of access. Definition and relationship to consumer satisfaction. Med Care. 1981;19:127–40. Wang F. Quantitative methods and applications in GIS. London: Taylor & Francis Group; 2006. Guagliardo MF. Spatial accessibility of primary care: concepts, methods and challenges. Int J Health Geogr. 2004;3:3. Théodat J-M. Haïti-République Dominicaine: une île pour deux, 1804–1916. Paris: Éditions Karthala; 2003. Théodat J-M, Mathon D, Mathelier R, Casséus M. Quisqueya: un papillon d'envol. In: Mathelier R, Mathon D, Casséus M, editors. Entreprise, Territoire et Développement: Compilation 2002–2003. Port-au-Prince: INESA/Le Nouvelliste; 2003. Wooding B, Mosely-Williams R, Flores C. Les immigrants haïtiens et leurs descendants en République Dominicaine. Haïti: Institut catholique pour les relations internationles, ISPOS; 2005. Silié R. Haïti et la République dominicaine, pays en conflit ou en construction d'une nouvelle amitié? Conjonction La revue franco-haïtienne de l'Institut Français d'Haïti. 2014;2226:98–110. Dilla Alfonso H, Alexis S, Antoine MI, Carmona C, de Jesús Cedano S, Murray GF, Espejo JEN, O'neil DJ, Rapilly M, Sánchez N: La frontera dominico-haitiana: Grupo de Estudios Multidisciplinarios Ciudades y Fronteras; 2010. Redon M. Frontière poreuse, État faible: les relations Haïti/République dominicaine à l'aune de la frontière. 
Bulletin de l'Association de géographes français. 2010;87:308–23. Petrozziello AJ, Wooding B: Fanm nan Fwontyè, Fanm toupatou: Éclairage sur la violence exercée sur les Immigrantes d'origine haïtienne, celles en transit migratoire et sur les déplacés internes le long de la frontière Dominicano-Haïtienne. Santo Domingo: Colectiva Mujer y Salud, Mujeres del Mundo, Observatoire sur la migration et la Caraïbe; 2011. Jolivet V. Les Haïtiens à Santo Domingo: une masse invisible? Bulletin de l'Association de géographes français. 2010;87:324–35. Wooding B. Women fight for their safety in the Dominican-Haitian border. Migr Dev. 2012;10(18):37–58. Murray GF: Sources of Conflict along and across the Haitian–Dominican border. In: Fwontyè nou—Nuestra Frontera. Santo Domingo Dominican Republic: Pan American Development Foundation; 2010. Montiel Armas I, Canales Cerón AI, Vargas Becerra PN: Migración y salud en zonas fronterizas: Haití y la República Dominicana: CEPAL; 2010. Ministerio del Trabajo, Observatorio del Mercado Laboral Dominicano: Inmigrantes Haitianos y Mercado Laboral, Estudio Sobre los Trabajadores de la Construcción y de la Producción del Guineo en la República Dominicana. In: República Dominicana: Ministerio del Trabajo; 2011. Alexandre G. Vers une gestion ordonnée de la migration entre la République dominicaine et Haïti. Conjonction La revue franco-haïtienne de l'Institut Français d'Haïti. Les realtions Haïti—République dominicaine. 2014;226:132–56. Oficina Nacional de Estadística: Primera Encuesta Nacional de Inmigrantes en la República Dominicana (ENI-2012). In: Santo Domingo, República Dominicana: Oficina Nacional de Estadística; 2013. p. 345. Oficina Nacional de Estadistica: Segunda Encuesta Nacional de Inmigrantes en la República Dominicana—ENI-2017—Version resumida del informe general. In: Santo Domingo: Oficina Nacional de Estadistica; 2018. Silié R, Segura C, Dore Cabral C. La nueva inmigración haitiana. Santo Domingo: Flacso; 2002. Organización Panamericana de la Salud: Haití. In: Salud OPdl, editor. Salud en las Américas, Edición de 2012: Volumen de países. Washington: Organización Panamericana de la Salud; 2012. Organización Panamericana de la Salud: República Dominicana In. vol. Salud en las Américas. Edición 2012: Volumen de países. Washington: Organización Panamericana de la Salud; 2012. Lavigne M, Vargas LH: Sistemas de protección social en América Latina y el Caribe: República Dominicana. In: (CEPAL) CEpALyeC, editor. Documento de Proyecto. . Santiago de Chile: Comisión Económica para América Latina y el Caribe (CEPAL); 2013. p. 40. Cercone JA. Análisis de situación y estado de los sistemas de salud de países del Caribe, vol. 185. Santiago de Chile: United Nations Publications; 2007. Secretaría de Estado de Salud Pública y Asistencia Social: Modelos de Red de los Servicios Regionales de Salud. In: Social SdEdSPyA, editor. 1a edición edn. Santo Domingo: Secretaría de Estado de Salud Pública y Asistencia Social; 2005. p. 199. Ministère de la santé publique et de la population: Politique Nationale de Santé. In: Edited by population Mdlspedl. Port-au-Prince; 2012. Bitrán R: Reformas recientes en el sector salud en Centroamérica, vol. 177: United Nations Publications; 2006. Ministerio de Salud Pública: Modelo de atención en salud en el sistema nacional de salud de la República Dominicana. In: (DDEI) DdDEI, editor. vol. 3. Santo Domingo: Ministerio de Salud Pública; 2012. 
Institut Haïtien de l'Enfance, ICF International: Évaluation de Prestation des Services de Soins de Santé, Haïti, 2013. In: Rockville Maryland: Ministère de la Santé Publique et de la Population (MSPP); 2014. Cayemittes M, Busangu MF, Bizimana JdD, Barrère B, Sévère B, Cayemittes V, Charles E: Enquête Mortalité, Morbidité et Utilisation des Services, Haïti, 2012. In: Haiti: MSPP, IHE et ICF International; 2013. Lamaute-Brisson N: Sistemas de protección social en América Latina y el Caribe: Haití. In: (CEPAL) CEpALyeC, editor. Documento de Proyecto. Santiago de Chile: Comisión Económica para América Latina y el Caribe (CEPAL); 2013. p. 40. Secretaría de Estado de Salud Pública y Asistencia Social: Manual de Sectorización/Zonificación de las UNAP. In Salud CEplRdS, editor. Santo Domingo: Secretaría de Estado de Salud Pública y Asistencia Social; 2008. p. 78. Secretaría de Estado de Salud Pública y Asistencia Social: Perfil del sistema de salud de la República dominicana. In: Salud SdEdSPyASCEplRdS, editor. Santo Domingo: Secretaría de Estado de Salud Pública y Asistencia Social/Comisión Ejecutiva para la Reforma del Sector Salud/Organización Panamericana de la Salud; 2007. p. 44. Ministère de la santé publique et de la population: Plan directeur de santé 2012–2022. In: population Mdlspedl, editor. Port-au-Prince; 2013. Wooding B. El impacto del terremoto en Haití sobre la inmigración haitiana en república dominicana. América Latina Hoy. 2010;56:111–29. Organización Panamericana de la Salud: Cooperación binacional entre Haití y la República Dominicana. In: Salud OPdl, editor. Organización Panamericana de la Salud; 2011. Dilla Alfonso H. Transborder Urban Complex in Latin America. Estudios Fronterizos. 2015;16(31):4–19. Dilla Alfonso H. Los complejos urbanos transfronterizos en América Latina. Estudios fronterizos. 2015;16(31):15–38. Dilla Alfonso H, de Jesús Cedano S. De problemas y oportunidades: intermediación urbana fronteriza en República Dominicana. Revista mexicana de sociología. 2005;67:99–126. Buzenot L: Les zones franches industrielles d'exportation dans la Caraïbe. Les causes économiques de leur émergence. Études Caribéennes 2010(13). INESA, FLACSO: Inventario de los conocimientos e intervenciones sobre la zona transfronteriza Haití-República Dominicana. In: Santo Domingo/Haití: PNUD/ACDI; 2003. Observatorio Binacional sobre Medio Ambiente M, Educación y Comercio,: Diagnóstico comercio bilateral República Dominicana y República de Haitï. In. República Dominicana/Haiti: Observatorio Binacional sobre Medio Ambiente, Migración, Educación y Comercio (OBMEC); 2016. Dilla Alfonso H. República Dominicana: La nueva cartografía transfronteriza. Caribbean Studies. 2007;35(1):181–205. Dilla Alfonso H: La migración transfronteriza urbana en la República Dominicana. In: Santo Domingo: Fundación Friedrich Ebert en República Dominicana; 2011. Icart J-C. Cela ne se fait pas! Développements récents dans le dossier des migrations et de l'apatridie en République dominicaine. Conjonction La revue franco-haïtienne de l'Institut Français d'Haïti. 2014;226:160–82. Higgs G. A literature review of the use of GIS-based measures of access to health care services. Health Serv Outcomes Res Methodol. 2004;5(2):119–39. Apparicio P, Gelb J, Dubé A-S, Kingham S, Gauvin L, Robitaille É. The approaches to measuring the potential spatial access to urban health services revisited: distance types and aggregation-error issues. Int J Health Geogr. 2017;16(1):32. McGrail MR, Humphreys JS. 
Measuring spatial accessibility to primary care in rural areas: improving the effectiveness of the two-step floating catchment area method. Appl Geogr. 2009;29(4):533–41. Luo W, Wang F. Measures of spatial accessibility to health care in a GIS environment: synthesis and a case study in the Chicago region. Environ Plan B Plan Des. 2003;30:865–84. Wan N, Zou B, Sternberg T. A three-step floating catchment area method for analyzing spatial access to health services. Int J Geogr Inf Sci. 2012;26(6):1073–89. Oppong JR, Hodgson MJ. Spatial accessibility to health care facilities in Suhum District, Ghana. Prof Geogr. 1994;46(2):199–209. Murawski L, Church RL. Improving accessibility to rural health services: the maximal covering network improvement problem. Socio-Econ Plan Sci. 2009;43(2):102–10. Querriau X, Peeters D, Thomas I, Kissiyar M: Localisation optimale d'unités de soins dans un pays en voie de développement: Analyse de sensibilité. CyberGeo. 2004. Bochaton A: "Paï Thaï, paï fang nan": "Aller en Thaïlande, aller de l'autre côté". Construction d'un espace sanitaire transfrontalier: le recours aux soins des Laotiens en Thaïlande. Paris 10; 2009. Tapia Ladino M, Liberona Concha N, Contreras Gatica Y. El surgimiento de un territorio circulatorio en la frontera chileno-peruana: estudio de las prácticas socio-espaciales fronterizas. Revista de Geografía Norte Grande. 2017;66:117–41.
DM is the principal investigator of the study. She carried out the GIS, statistical and mapping analyses. PA revised all the statistical and mapping analyses. PA and UL jointly drafted and critically revised the paper. All authors read and approved the final manuscript. The authors would like to thank the anonymous reviewers for their careful reading of our manuscript and their many insightful comments and suggestions. The author(s) declare that they have no competing interests. Please contact author for data requests. The authors provide full consent for publishing the manuscript. The authors are grateful for the financial support provided by the Canada Research Chair in Environmental Equity.
Environmental Equity Laboratory, INRS Centre Urbanisation Culture Société, 385, rue Sherbrooke Est, Montréal, Québec, H2X 1E3, Canada: Dominique Mathon & Philippe Apparicio. Département d'études urbaines et touristiques, Université du Québec à Montréal, Case postale 8888, Succursale Centre-Ville, Montréal, Québec, H3C 3P8, Canada: Ugo Lachapelle. Correspondence to Philippe Apparicio.
Mathon, D., Apparicio, P. & Lachapelle, U. Cross-border spatial accessibility of health care in the North-East Department of Haiti. Int J Health Geogr 17, 36 (2018) doi:10.1186/s12942-018-0156-6
Spatial accessibility Enhanced two-step floating catchment area
Asymptotic Analysis - Volume 24, issue 2

Robustness of boundary feedback controls for a flexible beam with respect to perturbation
Authors: Liu, Wei-Jiu
Abstract: In this paper we test the robustness of a nonlinear boundary feedback control for a flexible beam with respect to perturbation. We show that additional dynamics of perturbation at one of the ends of the beam, as long as they are strictly passive, will not destabilize the controlled beam.
Keywords: Flexible beam, robustness, stability, Lyapunov method
Citation: Asymptotic Analysis, vol. 24, no. 2, pp. 91-105, 2000

Propagation and absorption of concentration effects near shock hypersurfaces for the heat equation
Authors: Fermanian Kammerer, C.
Abstract: In this paper we study families of solutions to the heat equation with a small parameter and a propagation term consisting of a discontinuous vector field b through a smooth compact hypersurface S. Our purpose is to describe the evolution of the energy density, as h goes to 0, of a family of solutions for a bounded square-integrable family of initial data. Outside of S it is classical to calculate this limit by using semi-classical measures associated with the family of solutions. The discontinuity of b through S induces a difficulty that we overcome by means of a second microlocalization. We introduce two-microlocal items describing the concentration of a square-integrable bounded sequence on a hypersurface. By using these items we calculate, for convenient times, semi-classical measures of the family of solutions in the whole cotangent space and the limit of the energy density as the small parameter goes to 0.
Citation: Asymptotic Analysis, vol. 24, no. 2, pp. 107-141, 2000

Homogenization of a neutronic multigroup evolution model
Authors: Capdeboscq, Yves
Abstract: In this paper the homogenization of an evolution problem for a cooperative system of weakly coupled elliptic partial differential equations, called the neutronic multigroup diffusion model, is studied in a periodic heterogeneous domain. Such a model is used for studying the evolution of the neutron flux in a nuclear reactor core. We show that, under a symmetry assumption, the oscillatory behavior of the solutions is controlled by the first eigenvector of a multigroup eigenvalue problem posed in the periodicity cell, whereas the global trend is asymptotically given by a homogenized evolution problem. We then turn to cases where the symmetry condition is not fulfilled. In domains without boundaries, the limit equation for the global trend is then a homogenized transport equation. Alternatively, we show that in bounded domains and with well-prepared initial data, the microscopic scale not only controls the oscillatory behavior of the solutions, but also induces an exponential drift.

Constructive Borel–Ritt interpolation results for functions of several variables
Authors: Hernández, Jesús A. | Sanz, Javier
Abstract: We give constructive proofs for two Borel–Ritt interpolation results, stating the existence of holomorphic functions on polysectors admitting an arbitrarily prescribed asymptotic expansion, whether in the sense of Gérard–Sibuya or in that of Majima. As a consequence, a new proof is obtained for the classical Borel theorem on the existence of $\mathcal{C}^{\infty}$ functions on $\mathbb{R}^{n}$ with given derivatives at 0; in fact, our construction provides functions analytic in $(\mathbb{R}-\{0\})^{n}$.
Chapter 2 - Physics in Industrial Instrumentation

A fluid is any substance having the ability to flow: to freely change shape and move under the influence of a motivating force. Fluid motion may be analyzed on a microscopic level, treating each fluid molecule as an individual projectile body. This approach is extraordinarily tedious on a practical level, but still useful as a model of fluid behavior. Some fluid properties are accurately predicted by this model, especially predictions dealing with potential and kinetic energies. However, the ability of a fluid's molecules to independently move gives it unique properties that solids do not possess. One of these properties is the ability to effortlessly transfer pressure, defined as force applied over area.

The common phases of matter are solid, liquid, and gas. Liquids and gases are fundamentally distinct from solids in their intrinsic inability to maintain a fixed shape. In other words, liquids and gases tend to fill whatever solid containers they are held in. Similarly, liquids and gases both have the ability to flow, which is why they are collectively called fluids.

Due to their lack of definite shape, fluids tend to disperse any force applied to them. This stands in marked contrast to solids, which tend to transfer force only in the applied direction. Take for example the force transferred by a nail, from a hammer to a piece of wood: The impact of the hammer's blow is directed straight through the solid nail into the wood below – nothing surprising here. Now consider what a fluid would do when subjected to the same hammer blow: Given the freedom of a fluid's molecules to move about, the impact of the hammer blow becomes directed everywhere against the inside surface of the container (the cylinder). This is true for all fluids: liquids and gases alike. The only difference between the behavior of a liquid versus a gas in the same scenario is that the gas will compress (i.e. the piston will move down as the hammer strikes it), whereas the liquid will not compress (i.e. the piston will remain in its resting position). Gases yield under pressure, liquids do not.

It is very useful to quantify force applied to a fluid in terms of force per unit area, since the force applied to a fluid becomes evenly dispersed in all directions to the surface containing it. This is the definition of pressure (\(P\)): the amount of force (\(F\)) distributed across a given area (\(A\)).

\[P = {F \over A}\]

In the metric system, the standard unit of pressure is the pascal (Pa), defined as one Newton (N) of force per square meter (m\(^{2}\)) of area. In the British system of measurement, the standard unit of pressure is the PSI: pounds (lb) of force per square inch (in\(^{2}\)) of area. Pressure is often expressed in units of kilopascals (kPa) when metric units are used because one pascal is a rather small pressure for most engineering applications. The even distribution of force throughout a fluid has some very practical applications. One application of this principle is the hydraulic lift, which functions somewhat like a fluid lever: Force applied to the small piston creates a pressure throughout the fluid.
The even distribution of force throughout a fluid has some very practical applications. One application of this principle is the hydraulic lift, which functions somewhat like a fluid lever: Force applied to the small piston creates a pressure throughout the fluid. That pressure exerts a greater force on the large piston than what is exerted on the small piston, by a factor equal to the ratio of piston areas. Since area for a circular piston is proportional to the square of the radius (\(A = \pi r^2\)), even modest ratios of piston diameter yield large ratios of area and therefore of force. If the large piston has five times the area of the small piston (i.e. the large piston's diameter is 2.236 times that of the small piston), force will be multiplied five-fold.

Just as with the lever, however, there must be a trade-off so as not to violate the Conservation of Energy. The trade-off for increased force is decreased distance, whether in the lever system or in the hydraulic lift system. If the large piston generates a force five times greater than what is applied to the small piston, it must move only one-fifth as far as the small piston's motion. In this way, energy in equals energy out (remember that work, which is equivalent to energy, is calculated by multiplying force by parallel distance traveled). For those familiar with electricity, what you see here in either the lever system or the hydraulic lift is analogous to a transformer: we can step AC voltage up, but only by reducing AC current. Being a passive device, a transformer cannot boost power. Therefore, power out can never be greater than power in, and given a perfectly efficient transformer, power out will always be precisely equal to power in:

\[\hbox{Power} = (\hbox{Voltage in}) (\hbox{Current in}) = (\hbox{Voltage out}) (\hbox{Current out})\]

\[\hbox{Work} = (\hbox{Force in}) (\hbox{Distance in}) = (\hbox{Force out}) (\hbox{Distance out})\]

Fluid may be used to transfer power just as electricity is used to transfer power. Such systems are called hydraulic if the fluid is a liquid (usually oil), and pneumatic if the fluid is a gas (usually air). In either case, a machine (pump or compressor) is used to generate a continuous fluid pressure, pipes are used to transfer the pressurized fluid to the point of use, and then the fluid is allowed to exert a force against a piston or a set of pistons to do mechanical work. Fluid power systems are discussed in more detail elsewhere in this book.

An interesting use of fluid we see in the field of instrumentation is as a signaling medium, to transfer information between places rather than to transfer power between places. This is analogous to using electricity to transmit voice signals in telephone systems, or digital data between computers along copper wire. Here, fluid pressure represents some other quantity, and the principle of force being distributed equally throughout the fluid is exploited to transmit that representation to some distant location, through piping or tubing: This illustration shows a simple temperature-measuring system called a filled bulb, where an enclosed bulb filled with fluid is exposed to a temperature that we wish to measure. A rise in temperature makes the fluid expand and thereby increases pressure sensed at the gauge. The purpose of the fluid here is two-fold: first to sense temperature, and second to relay this temperature measurement a long distance away to the gauge. The principle of even pressure distribution allows the fluid to act as a signal medium to convey the information (bulb temperature) to a distant location.

Pascal's Principle and hydrostatic pressure

We learned earlier that fluids tend to evenly distribute any applied force.
This fundamental principle is the basis of fluid power and fluid signaling systems, where pressure is assumed to be transferred equally to all points in a confined fluid. In the example of a hydraulic lift given earlier, we assume that the pressure throughout the fluid pathway is equal: If additional force is applied to the small piston (say, 160 lbs instead of 150 lbs), the fluid pressure throughout the system will increase, not just the fluid pressure in the vicinity of the piston. The effect of this additional force will be immediately "felt" at all points of the system. The phenomenon of pressure changes being evenly distributed throughout an enclosed fluid is called Pascal's principle. Pascal's principle is really nothing more than the direct consequence of fluids' ability to flow. The only way an additional applied pressure would not be transmitted to all points within a confined fluid volume is if the fluid molecules were somehow not free to move. Since they are mobile, any compression applied to one region of that fluid will propagate to all other regions within that fluid volume. As fluid molecules are subjected to greater pressure, they naturally try to migrate to regions of lower pressure where they "bump up" against other fluid molecules, distributing that increased pressure in doing so. Pascal's principle tells us any change in applied pressure to a confined fluid will be distributed evenly throughout, but it does not say pressure will be the same throughout all points. If forces other than those applied to pistons exert pressure on the fluid, we may indeed experience gradients of pressure throughout a confined fluid volume. In cases where we are dealing with tall columns of dense fluid, there is another force we must consider: the weight of the fluid itself. Suppose we took a cubic foot of water which weighs approximately 62.4 pounds, and poured it into a very tall vertical tube with a cross-sectional area of 1 square inch: Naturally, we would expect the pressure measured at the bottom of this tall tube to be 62.4 pounds per square inch, since the entire column of water (weighing 62.4 pounds) has its weight supported by one square inch of area. If we placed another pressure gauge mid-way up the tube, though, how much pressure would it register? At first you might be inclined to say 62.4 PSI as well, because you learned earlier in this lesson that fluids naturally distribute force throughout their bulk. However, in this case the pressure is not the same mid-way up the column as it is at the bottom: The reason for this apparent discrepancy is that the source of pressure in this fluid system comes from the weight of the water column itself. Half-way up the column, the water only experiences half the total weight (31.2 pounds), and so the pressure is half of what it is at the very bottom. We did not consider this effect before, because we assumed the force exerted by the piston in the hydraulic lift was so large it "swamped" the weight of the fluid itself. Here, with our very tall column of water (144 feet tall!), the effect of gravity upon the water's mass is quite substantial. Indeed, without a piston to exert an external force on the water, weight is the only source of force we have to consider when calculating pressure. This fact does not invalidate Pascal's principle. Any change in pressure applied to the fluid column will still be distributed equally throughout. 
For example, if we were to place a piston at the top of this fluid column and apply a force to the fluid, pressure at all points in that fluid column would increase by the same amount. This is not the same as saying all pressures will be equal throughout the column, however. An interesting fact about pressure generated by a column of fluid is that the width or shape of the containing vessel is irrelevant: the height of the fluid column is the only dimension we need to consider. Examine the following tube shapes, all connected at the bottom: Since the force of fluid weight is generated only along the axis of gravitational attraction (straight down), that is the only axis of measurement important in determining "hydrostatic" fluid pressure. The fixed relationship between the vertical height of a water column and pressure is such that sometimes water column height is used as a unit of measurement for pressure. That is, instead of saying "30 PSI," we could just as correctly quantify that same pressure as 830.4 inches of water ("W.C. or "H\(_{2}\)O), the conversion factor being approximately 27.68 inches of vertical water column per PSI. As one might guess, the density of the fluid in a vertical column has a significant impact on the hydrostatic pressure that column generates. A liquid twice as dense as water, for example, will produce twice the pressure for a given column height. For example, a column of this liquid (twice as dense as water) 14 inches high will produce a pressure at the bottom equal to 28 inches of water (28 "W.C.), or just over 1 PSI. An extreme example is liquid mercury, which is over 13.5 times as dense as water. Due to its exceptional density and ready availability, the height of a mercury column is also used as a standard unit of pressure measurement. For instance, 25 PSI could be expressed as 50.9 inches of mercury ("Hg), the conversion factor being approximately 2.036 inches of vertical mercury column per PSI. The mathematical relationship between vertical liquid height and hydrostatic pressure is quite simple, and may be expressed by either of the following formulae: \[P = \rho g h\] \[P = \gamma h\] \(P\) = Hydrostatic pressure in units of weight per square area unit: pascals (N/m\(^{2}\)) or lb/ft\(^{2}\) \(\rho\) = Mass density of liquid in kilograms per cubic meter (metric) or slugs per cubic foot (British) \(g\) = Acceleration of gravity (9.81 meters per second squared or 32.2 feet per second squared) \(\gamma\) = Weight density of liquid in newtons per cubic meter (metric) or pounds per cubic foot (British) \(h\) = Vertical height of liquid column Dimensional analysis – where we account for all units of measurement in a formula – validates the mathematical relationship between pressure, density, and height. Taking the second formula as an example: \[\left[\hbox{lb} \over \hbox{ft}^2\right] = \left[ \hbox{lb} \over \hbox{ft}^3 \right] \left[\hbox{ft} \over 1 \right]\] As you can see, the unit of "feet" in the height term cancels out one of the "feet" units in the denominator of the density term, leaving an answer for pressure in units of pounds per square foot. If one wished to set up the problem so the answer presented in a more common pressure unit such as pounds per square inch, both the liquid density and height would have to be expressed in appropriate units (pounds per cubic inch and inches, respectively). 
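As a quick numerical check of the \(P = \gamma h\) relationship and the unit handling just described, here is a short Python sketch. It assumes the same round figures used in the text (62.4 lb/ft\(^{3}\) for water, a specific gravity of roughly 13.6 for mercury), so treat it as an illustration rather than a table of precise constants.

```python
WATER_WEIGHT_DENSITY = 62.4   # lb/ft^3, as used in the text
MERCURY_SG = 13.6             # approximate specific gravity of mercury

def hydrostatic_psi(height_in, weight_density_lb_ft3):
    """P = gamma * h, with height in inches and gamma in lb/ft^3, result in PSI."""
    gamma_lb_in3 = weight_density_lb_ft3 / 1728.0   # 1728 cubic inches per cubic foot
    return gamma_lb_in3 * height_in

# The 144-foot (1728-inch) water column described earlier:
print(hydrostatic_psi(1728, WATER_WEIGHT_DENSITY))   # ~62.4 PSI at the bottom

# Column height needed to produce exactly 1 PSI:
inches_wc_per_psi = 1.0 / (WATER_WEIGHT_DENSITY / 1728.0)
inches_hg_per_psi = 1.0 / (WATER_WEIGHT_DENSITY * MERCURY_SG / 1728.0)
print(round(inches_wc_per_psi, 2))   # ~27.69 inches of water column per PSI
print(round(inches_hg_per_psi, 3))   # ~2.036 inches of mercury per PSI
```

The tiny difference between 27.69 here and the 27.68 figure quoted in the text is simply rounding of water's density.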
Applying this to a realistic problem, consider the case of a tank filled with 8 feet (vertical) of castor oil, having a weight density of 60.5 pounds per cubic foot: This is how we would set up the formula to calculate for hydrostatic pressure at the bottom of the tank: \[P = \left({60.5 \hbox{ lb} \over \hbox{ft}^3}\right) \left(8 \hbox{ ft}\right)\] \[P = {484 \hbox{ lb} \over \hbox{ft}^2}\] If we wished to convert this result into a more common unit such as PSI (pounds per square inch), we could do so using an appropriate fraction of conversion units: \[P = \left({484 \hbox{ lb} \over \hbox{ft}^2}\right) \left({1 \hbox{ ft}^2 \over 144 \hbox{ in}^2}\right)\] \[P = {3.36 \hbox{ lb} \over \hbox{in}^2} = 3.36 \hbox{ PSI}\] Fluid density expressions The density of any substance is defined as the ratio of its mass or weight to the volume occupied by that mass or weight. Common expressions of density include pounds per cubic foot (British units) and kilograms per cubic meter (metric units). When the substance in question is a liquid, a common form of expression for density is a ratio of the liquid's density to the density of pure water at standard temperature. This ratio is known as specific gravity. For example, the specific gravity of glycerin may be determined by dividing the density of glycerin by the density of water: \[\hbox{Specific gravity of any liquid} = {D_{liquid} \over D_{water}}\] \[\hbox{Specific gravity of glycerin} = {D_{glycerin} \over D_{water}} = { 78.6 \hbox{ lb/ft}^3 \over 62.4 \hbox{ lb/ft}^3} = 1.26\] The density of gases may also be expressed in ratio form, except the standard of comparison is ambient air instead of water. Chlorine gas, for example, has a specific gravity of 2.47 (each volumetric unit of chlorine having 2.47 times the mass of the same volume of air under identical temperature and pressure conditions). Specific gravity values for gases are sometimes called relative gas densities to avoid confusion with "specific gravity" values for liquids. As with all ratios, specific gravity is a unitless quantity. In our example with glycerine, we see how the identical units of pounds per cubic foot cancel out of both numerator and denominator, to leave a quotient with no unit at all. An alternative to expressing fluid density as a ratio of mass (or weight) to volume, or to compare it against the density of a standard fluid such as pure water or air, is to express it as the ratio of volume to mass. This is most commonly applied to vapors such as steam, and it is called specific volume. The relationship between specific volume and density is one of mathematical reciprocation: the reciprocal of density (e.g. pounds per cubic foot) is specific volume (e.g. cubic feet per pound). For example, consulting a table of saturated steam properties, we see that saturated steam at a pressure of 60 PSIA has a specific volume of 7.175 cubic feet per pound. Translating this into units of pounds per cubic feet, we reciprocate the value 7.175 to arrive at 0.1394 pounds per cubic foot. Industry-specific units of measurement also exist for expressing the relative density of a fluid. 
These units of measurement all begin with the word "degree" much the same as for units of temperature measurement, for example: Degrees API (used in the petroleum industries) Degrees Baumé (used in a variety of industries including paper manufacture and alcohol production) Degrees Twaddell (used in the textile industry for tanning solutions and the like) The mathematical relationships between each of these "degree" units of density versus specific gravity is as follows: \[\hbox{Degrees API} = {141.5 \over \hbox{Specific gravity}} - 131.5\] \[\hbox{Degrees Twaddell} = 200 \times (\hbox{Specific gravity} - 1)\] Two different formulae exist for the calculation of degrees Baumé, depending on whether the liquid in question is heavier or lighter than water. For lighter-than-water liquids: \[\hbox{Degrees Baum\'e (light)} = {140 \over \hbox{Specific gravity}} - 130\] Note that pure water would measure 10\(^{o}\) Baumé on the light scale. As liquid density decreases, the light Baumé value increases. For heavier-than-water liquids: \[\hbox{Degrees Baum\'e (heavy)} = 145 - {145 \over \hbox{Specific gravity}}\] Note that pure water would measure 0\(^{o}\) Baumé on the heavy scale. As liquid density increases, the heavy Baumé value increases. Just to make things confusing, there are different standards for the heavy Baumé scale. Instead of the constant value 145 shown in the above equation (used throughout the United States of America), an older Dutch standard used the same formula with a constant value of 144. The Gerlach heavy Baumé scale uses a constant value of 146.78: \[\hbox{Degrees Baum\'e (heavy, old Dutch)} = 144 - {144 \over \hbox{Specific gravity}}\] \[\hbox{Degrees Baum\'e (heavy, Gerlach scale)} = 146.78 - {146.78 \over \hbox{Specific gravity}}\] There exists a seemingly endless array of "degree" scales used to express liquid density, scattered throughout the pages of history. For the measurement of sugar concentrations in the food industries, the unit of degrees Balling was invented. This scale was later revised to become the unit of degrees Brix, which is directly proportional to the percent concentration of sugar in the liquid. Another density scale used for expressing sugar concentration is degrees Plato. The density of tanning liquor may be measured in degrees Bark. Milk density may be measured in degrees Soxhlet. Vegetable oil density (and in older times, the density of oil extracted from sperm whales) may be measured in degrees Oleo. Expressing fluid pressure in terms of a vertical liquid column makes perfect sense when we use a very simple kind of motion-balance pressure instrument called a manometer. A manometer is nothing more than a piece of clear (glass or plastic) tubing filled with a liquid of known density, situated next to a scale for measuring distance. The most basic form of manometer is the U-tube manometer, shown here: The basis for all manometers is the mathematical relationship between a liquid's density (\(\rho\) in mass units or \(\gamma\) in weight units) and vertical height. The diameter of the manometer tubes is irrelevant: Pressure is read on the scale as the difference in height (\(h\)) between the two liquid columns. One nice feature of a manometer is it really cannot become "uncalibrated" so long as the fluid is pure and the assembly is maintained in an upright position. If the fluid used is water, the manometer may be filled and emptied at will, and even rolled up for storage if the tubes are made of flexible plastic. 
We may create even more sensitive manometers by purposely inclining one or more of the tubes, so that the liquid must travel a farther distance along the tube length to achieve the same vertical shift in height. This has the effect of "amplifying" the liquid's motion to make it easier to resolve small pressures: This way, a greater motion of liquid (\(x\)) is required to generate the same hydrostatic pressure (vertical liquid displacement, \(h\)) than in an upright manometer, making the inclined manometer more sensitive. As the similar triangle in the illustration shows, \(x\) and \(h\) are related trigonometrically by the sine function:

\[\sin \theta = {h \over x}\]

The difference in fluid column positions measured diagonally along the scale (\(x\)) must always be greater than the vertical height difference between the two columns (\(h\)) by a factor of \(1 \over {\sin \theta}\), which will always be greater than one for angles less than 90\(^{o}\). The smaller the angle \(\theta\), the greater the ratio between \(x\) and \(h\), leading to more sensitivity.

If even more sensitivity is desired, we may construct something called a micromanometer, consisting of a gas bubble trapped in a clear horizontal tube between two large vertical manometer chambers: Pressure applied to the top of either vertical chamber will cause the vertical liquid columns to shift just the same as any U-tube manometer. However, the bubble trapped in the clear horizontal tube will move much farther than the vertical displacement of either liquid column, owing to the huge difference in cross-sectional area between the vertical chambers and the horizontal tube. This amplification of motion is analogous to the amplification of motion in a hydraulic piston system (where the smaller piston moves farther than the larger piston), and makes the micromanometer exceptionally sensitive to small pressures. The movement of the gas bubble within the clear horizontal viewing tube (\(x\)) relates to the applied pressure (\(P\)) by the following formula, where \(A_{large}\) is the cross-sectional area of each vertical chamber, \(A_{small}\) is the cross-sectional area of the horizontal tube, and \(h = P / \gamma\) is the height difference an ordinary U-tube manometer would register:

\[x = {P A_{large} \over {2 \gamma A_{small}}} = {h A_{large} \over {2 A_{small}}}\]

Using water as the working liquid in a standard U-tube manometer, 1 PSI of applied gas pressure results in approximately 27.7 inches of vertical liquid column displacement (i.e. 27.7 inches of height difference between the two water columns). This relatively large range of motion limits the usefulness of water manometers to modest pressures only. If we wished to use a water manometer to measure the pressure of compressed air in an industrial pneumatic supply system at approximately 100 PSI, the manometer would have to be in excess of 230 feet tall! Clearly, a water manometer would not be the proper instrument to use for such an application. However, water is not the only viable liquid for use in manometers. We could take the exact same clear U-tube and partially fill it with liquid mercury instead, which is substantially denser than water. In a mercury manometer, 1 PSI of applied gas pressure results in very slightly more than 2 inches of liquid column displacement. A mercury manometer applied to the task of measuring air pressure in an industrial pneumatic system would only have to be 17 feet tall – still quite large and cumbersome for a measuring instrument, but not impossible to construct or to use.
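The column-height figures quoted above are easy to reproduce. The sketch below is a minimal illustration, assuming water at 62.4 lb/ft\(^{3}\) and mercury at roughly 13.6 times that density; the 10-degree incline angle at the end is an arbitrary example of the \(1/\sin\theta\) scale-length amplification.

```python
import math

def column_height_inches(pressure_psi, specific_gravity):
    """Vertical liquid column (inches) needed to balance a given pressure."""
    gamma_lb_in3 = 62.4 * specific_gravity / 1728.0   # weight density in lb/in^3
    return pressure_psi / gamma_lb_in3

# Sizing a manometer for a 100 PSI pneumatic supply:
h_water = column_height_inches(100, 1.0)       # ~2769 inches
h_mercury = column_height_inches(100, 13.6)    # ~204 inches
print(f"Water:   {h_water/12:.0f} ft of column")    # ~231 ft
print(f"Mercury: {h_mercury/12:.0f} ft of column")  # ~17 ft

# Inclined manometer: scale length x needed for a given vertical shift h
theta_deg = 10.0   # hypothetical incline angle
h = 1.0            # 1 inch of vertical displacement
x = h / math.sin(math.radians(theta_deg))
print(f"At {theta_deg} degrees, 1 inch of vertical shift spreads over {x:.2f} inches of scale")
```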
A common form of manometer seen in industrial instrument calibration shops is the well type, consisting of a single vertical tube and a relatively large reservoir (called the "well") acting as the second column: Due to the well's much larger cross-sectional area, liquid motion inside of it is negligible compared to the motion of liquid inside the clear viewing tube. For all practical purposes, the liquid level inside the "well" is constant, and so the liquid inside the tube moves the full distance equivalent to the applied pressure. Thus, the well manometer provides an easier means of reading pressure: no longer does one have to measure the difference of height between two liquid columns, only the height of a single column.

Systems of pressure measurement

Pressure measurement is often a relative thing. When we say there is 35 PSI of air pressure in an inflated car tire, what we mean is that the pressure inside the tire is 35 pounds per square inch greater than the surrounding, ambient air pressure. It is a fact that we live and breathe in a pressurized environment. Just as a vertical column of liquid generates a hydrostatic pressure, so does a vertical column of gas. If the column of gas is very tall, the pressure generated by it will be substantial. Such is the case with Earth's atmosphere, the pressure at sea level caused by the weight of the atmosphere being approximately 14.7 PSI. You and I do not perceive this constant air pressure around us because the pressure inside our bodies is equal to the pressure outside our bodies. Thus our eardrums, which serve as differential pressure-sensing diaphragms, detect no difference of pressure between the inside and outside of our bodies. The only time the Earth's air pressure becomes perceptible to us is if we rapidly ascend or descend, where the pressure inside our bodies does not have time to equalize with the pressure outside, and we feel the force of that differential pressure on our eardrums.

If we wish to speak of a fluid pressure in terms of how it compares to a perfect vacuum (absolute zero pressure), we specify it in terms of absolute units. For example, when I said earlier that the atmospheric pressure at sea level was 14.7 PSI, what I really meant is it is 14.7 PSIA (pounds per square inch absolute), meaning 14.7 pounds per square inch greater than a perfect vacuum. When I said earlier that the air pressure inside an inflated car tire was 35 PSI, what I really meant is it was 35 PSIG (pounds per square inch gauge), meaning 35 pounds per square inch greater than ambient air pressure. The qualifier "gauge" implies the pressure indicated by a pressure-measuring gauge, which in most cases works by comparing the sample fluid's pressure to that of the surrounding atmosphere. When units of pressure measurement are specified without a "G" or "A" suffix, "gauge" pressure is usually assumed.

Gauge and absolute pressure values for some common fluid pressures are shown in this table:

Gauge pressure | Fluid example | Absolute pressure
90 PSIG | Bicycle tire air pressure | 104.7 PSIA
35 PSIG | Automobile tire air pressure | 49.7 PSIA
0 PSIG | Atmospheric pressure at sea level | 14.7 PSIA
-9.8 PSIG (9.8 PSI vacuum) | Engine manifold vacuum under idle conditions | 4.9 PSIA
-14.7 PSIG (14.7 PSI vacuum) | Perfect vacuum (no fluid molecules present) | 0 PSIA

Note that the only difference between each of the corresponding gauge and absolute pressures is an offset of 14.7 PSI, with absolute pressure being the larger (more positive) value.
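The table rows differ only by that 14.7 PSI offset, which is easy to capture in code. Here is a small sketch, assuming the rounded sea-level value of 14.7 PSI used throughout this chapter, that regenerates the absolute-pressure column from the gauge values:

```python
ATMOSPHERE_PSI = 14.7   # rounded sea-level atmospheric pressure used in this chapter

def psig_to_psia(psig):
    """Gauge to absolute: add the atmospheric offset."""
    return psig + ATMOSPHERE_PSI

def psia_to_psig(psia):
    """Absolute to gauge: subtract the atmospheric offset."""
    return psia - ATMOSPHERE_PSI

examples = {
    "Bicycle tire air pressure": 90.0,
    "Automobile tire air pressure": 35.0,
    "Atmospheric pressure at sea level": 0.0,
    "Engine manifold vacuum (idle)": -9.8,
    "Perfect vacuum": -14.7,
}

for name, psig in examples.items():
    print(f"{name:35s} {psig:6.1f} PSIG = {psig_to_psia(psig):5.1f} PSIA")
```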
This offset of 14.7 PSI between absolute and gauge pressures can be confusing if we must convert between different pressure units. Suppose we wished to express the tire pressure of 35 PSIG in units of inches of water column ("W.C.). If we stay in the gauge-pressure scale, all we have to do is multiply by 27.68:

\[{{35 \> \hbox{\sout{PSI}}} \over 1} \times {{27.68 \> \hbox{"W.C.}} \over {1 \> \hbox{\sout{PSI}}}} = 968.8 \> \hbox{"W.C.}\]

Note how the fractions have been arranged to facilitate cancellation of units. The "PSI" unit in the numerator of the first fraction cancels with the "PSI" unit in the denominator of the second fraction, leaving inches of water column ("W.C.) as the only unit standing. Multiplying the first fraction (35 PSI over 1) by the second fraction (27.68 "W.C. over 1 PSI) is "legal" to do since the second fraction has a physical value of unity (1): being that 27.68 inches of water column is the same physical pressure as 1 PSI, the second fraction is really the number "1" in disguise. As we know, multiplying any quantity by unity does not change its value, so the result of 968.8 "W.C. we get has the exact same physical meaning as the original figure of 35 PSI. This technique of unit conversion is sometimes known as unity fractions, and it is discussed in more general terms in another section of this book.

If, however, we wished to express the car's tire pressure in terms of inches of water column absolute (in reference to a perfect vacuum), we would have to include the 14.7 PSI offset in our calculation, and do the conversion in two steps:

\[35 \> \hbox{PSIG} + 14.7 \> \hbox{PSI} = 49.7 \> \hbox{PSIA}\]

\[{{49.7 \> \hbox{\sout{PSIA}}} \over 1} \times {{27.68 \> \hbox{"W.C.A}} \over {1 \> \hbox{\sout{PSIA}}}} = 1375.7 \> \hbox{"W.C.A}\]

The ratio between inches of water column and pounds per square inch is still the same (27.68:1) in the absolute scale as it is on the gauge scale. The only difference is that we included the 14.7 PSI offset in the very beginning to express the tire's pressure on the absolute scale rather than on the gauge scale. From then on, all conversions were performed in absolute units. This two-step conversion process is not unlike converting between different units of temperature (degrees Celsius versus degrees Fahrenheit), and for the exact same reason. To convert from \(^{o}\)F to \(^{o}\)C, we must first subtract an offset of 32 degrees, then multiply by \(5 \over 9\). The reason an offset is involved in this temperature conversion is that the two temperature scales do not share the same "zero" point: 0 \(^{o}\)C is not the same temperature as 0 \(^{o}\)F. Likewise, 0 PSIG is not the same pressure as 0 PSIA, and so an offset is always necessary to convert between gauge and absolute pressure units.

As seen with the unit of pounds per square inch (PSI), the distinction between gauge and absolute pressure is typically shown by a lettered suffix "G" or "A" following the unit, respectively. Following this convention, we may encounter other units of pressure measurement qualified as either gauge or absolute by these letters: kPaA (kilopascals absolute), inches HgG (inches of mercury gauge), inches W.C.A (inches of water column absolute), etc. There are some pressure units that are always in absolute terms, and as such require no letter "A" to specify. One is the unit of atmospheres, 1 atmosphere being 14.7 PSIA. There is no such thing as "atmospheres gauge" pressure.
For example, if we were given a pressure as being 4.5 atmospheres and we wanted to convert that into pounds per square inch gauge (PSIG), the conversion would be a two-step process: \[{{4.5 \> \hbox{\sout{atm}}} \over 1} \times {{14.7 \> \hbox{PSIA}} \over {1 \> \hbox{\sout{atm}}}} = 66.15 \> \hbox{PSIA}\] \[66.15 \> \hbox{PSIA} - 14.7 \> \hbox{PSI} = 51.45 \> \hbox{PSIG}\] Another unit of pressure measurement that is always absolute is the torr, equal to 1 millimeter of mercury column absolute (mmHgA). 0 torr is absolute zero, equal to 0 atmospheres, 0 PSIA, or \(-14.7\) PSIG. Atmospheric pressure at sea level is 760 torr, equal to 1 atmosphere, 14.7 PSIA, or 0 PSIG. If we wished to convert the car tire's pressure of 35 PSIG into torr, we would once again have to offset the initial value to get everything into absolute terms. \[{{49.7 \> \hbox{\sout{PSIA}}} \over 1} \times {{760 \> \hbox{torr}} \over {14.7 \> \hbox{\sout{PSIA}}}} = 2569.5 \> \hbox{torr}\] One last unit of pressure measurement deserves special comment, for it may be used to express either gauge or absolute pressure, yet it is not customary to append a "G" or an "A" to the unit. This unit is the bar, exactly equal to 100 kPa, and approximately equal to 14.5 PSI. Some technical references append a lower-case letter "g" or "a" to the word "bar" to show either gauge pressure (barg) or absolute pressure (bara), but this notation seems no longer favored. Modern usage typically omits the "g" or "a" suffix in favor of context: the word "gauge" or "absolute" may be included in the expression to clarify the meaning of "bar." Sadly, many references fail to explicitly declare either "gauge" or "absolute" when using units of bar, leaving the reader to interpret the intended context. Despite this ambiguity, the bar is frequently used in European literature as a unit of pressure measurement. If a chamber is completely evacuated of any and all fluid molecules such that it contains nothing but empty space, we say that it contains a perfect vacuum. With no fluid molecules inside the chamber whatsoever, there will be no pressure exerted on the chamber walls by any fluid. This is the defining condition of zero absolute pressure (e.g. 0 PSIA, 0 torr, 0 atmospheres, etc.). Referencing atmospheric air pressure outside of this vessel, we could say that the "gauge" pressure of a perfect vacuum is \(-14.7\) PSIG. A commonly-taught principle is that a perfect vacuum is the lowest pressure possible in any physical system. However, this is not strictly true. It is, in fact, possible to generate pressures below 0 PSIA – pressures that are actually less than that of a perfect vacuum. The key to understanding this is to consider non-gaseous systems, where the pressure in question exists within a solid or a liquid substance. Let us begin our exploration of this concept by considering the case of weight applied to a solid metal bar: Recall that pressure is defined as force exerted over area. This metal bar certainly has a cross-sectional area, and if a compressive force is applied to the bar then the molecules of metal inside the bar will experience a pressure attempting to force them closer together. Supposing the bar in question measured 1.25 inches wide and thick, its cross-sectional area would be (1.25 in)\(^{2}\), or 1.5625 in\(^{2}\). 
Applying a force of 80 pounds along the bar's length would set up an internal pressure within the bar of 51.2 pounds per square inch, or 51.2 PSI: Now suppose we reverse the direction of the applied force to the bar, applying tension to the bar rather than compression. If the force is still 80 pounds and the cross-sectional area is still 1.5625 square inches, then the internal pressure inside the bar must be \(-51.2\) PSI: The negative pressure value describes the tensile force experienced by the molecules of metal inside the bar: a degree of force per unit area attempting to pull those molecules apart from each other rather than push them closer together as was the case with a compressive force. If you believe that the lowest possible pressure is a perfect vacuum (0 PSIA, or \(-14.7\) PSIG), then this figure of \(-51.2\) PSI seems impossible. However, it is indeed possible because we are dealing with a solid rather than with a gas. Gas molecules exert pressure on a surface by striking that surface and exerting a force by the momentum of their impact. Since gas molecules can only strike (i.e. push) against a surface, and cannot pull against a surface, one cannot generate a negative absolute pressure using a gas. In solids, however, the molecules comprising the sample exhibit cohesion, allowing us to set up a tension within that material impossible in a gaseous sample where there is no cohesion between the molecules. Thus, negative pressures are possible within samples of solid material even though they are impossible within gases. Negative pressures are also possible within liquid samples, provided there are no bubbles of gas or vapor anywhere within the sample. Like solids, the molecules within a liquid also exhibit cohesion (i.e. they tend to "stick" together rather than drift apart from each other). If a piston-and-cylinder arrangement is completely filled with liquid, and a tension applied to the movable piston, the molecules within that liquid will experience tension as well. Thus, it is possible to generate negative pressures (below 0 PSIA) within liquids that are impossible with gases. Even vertical columns of liquid may generate negative pressure. The famous British scientists Hooke and Boyle demonstrated a negative pressure of \(-0.2\) MPa (\(-29\) PSI) using a column of liquid mercury. Trees naturally generate huge negative pressures in order to draw water to their full height, up from the ground. Two scientists, H.H. Dixon and J. Joly, presented a scientific paper entitled On the Ascent of Sap in 1895 proposing liquid tension as the mechanism by which trees could draw water up tremendous heights. If even the smallest bubble of gas exists within a liquid sample, however, negative pressures become impossible. Since gases can only exert positive pressures, and Pascal's Principle tells us that pressure will be equally distributed throughout a fluid sample, the low-limit of 0 PSIA for gases establishes a low pressure limit for the entire liquid/gas sample. In other words, the presence of any gas within an otherwise liquid sample prevents the entire sample from experiencing tension. One limitation to the generation of negative pressures within liquids is that disturbances and/or impurities within the liquid may cause that liquid to spontaneously boil (changing phase from liquid to vapor), at which point a sustained negative pressure becomes impossible. When a solid body is immersed in a fluid, it displaces an equal volume of that fluid. 
This displacement of fluid generates an upward force on the object called the buoyant force. The magnitude of this force is equal to the weight of the fluid displaced by the solid body, and it is always directed exactly opposite the line of gravitational attraction. This is known as Archimedes' Principle. Buoyant force is what makes ships float. A ship sinks into the water just enough so the weight of the water displaced is equal to the total weight of the ship and all it holds (cargo, crew, food, fuel, etc.): If we could somehow measure the weight of that water displaced, we would find it exactly equals the dry weight of the ship: Expressed mathematically, Archimedes' Principle states that the buoyant force is the product of the liquid volume and liquid density: \[F_{buoyant} = \gamma V\] \(F_{b}\) = Buoyant force exerted on object, opposite in direction from gravity \(\gamma\) = Weight density of liquid \(V\) = Volume of liquid displaced by the submerged object We may use dimensional analysis to confirm correct cancellation of British units in the Archimedes' Principle formula: \[[\hbox{lb}] = {[\hbox{lb}] \over [\hbox{ft}^3]} [\hbox{ft}^3]\] Notice how the units of measurement for weight density (pounds per cubic foot) combine with the unit of measurement for volume (cubic feet) to cancel the unit of cubic feet and leave us with force measured in pounds. Archimedes' Principle also explains why hot-air balloons and helium aircraft float. By filling a large enclosure with a gas that is less dense than the surrounding air, that enclosure experiences an upward (buoyant) force equal to the difference between the weight of the air displaced and the weight of the gas enclosed. If this buoyant force equals the weight of the craft and all it holds (cargo, crew, food, fuel, etc.), it will exhibit an apparent weight of zero, which means it will float. If the buoyant force exceeds the weight of the craft, the resultant force will cause an upward acceleration according to Newton's Second Law of motion (\(F = ma\)). Submarines also make use of Archimedes' Principle, adjusting their buoyancy by adjusting the amount of water held by ballast tanks on the hull. Positive buoyancy is achieved by "blowing" water out of the ballast tanks with high-pressure compressed air, so the submarine weighs less (but still occupies the same hull volume and therefore displaces the same amount of water). Negative buoyancy is achieved by "flooding" the ballast tanks so the submarine weighs more. Neutral buoyancy is when the buoyant force exactly equals the weight of the submarine and the remaining water stored in the ballast tanks, so the submarine is able to "hover" in the water with no vertical acceleration or deceleration. An interesting application of Archimedes' Principle is the quantitative determination of an object's density by submersion in a liquid. For instance, copper is 8.96 times as dense as water, with a mass of 8.96 grams per cubic centimeter (8.96 g/cm\(^{3}\)) as opposed to water at 1.00 gram per cubic centimeter (1.00 g/cm\(^{3}\)). If we had a sample of pure, solid copper exactly 1 cubic centimeter in volume, it would have a mass of 8.96 grams. Completely submerged in pure water, this same sample of solid copper would appear to have a mass of only 7.96 grams, because it would experience a buoyant force equivalent to the mass of water it displaces (1 cubic centimeter = 1 gram of water). 
Thus, we see that the difference between the dry mass (mass measured in air) and the wet mass (mass measured when completely submerged in water) is the mass of the water displaced. Dividing the sample's dry mass by this mass difference (dry \(-\) wet mass) yields the ratio between the sample's mass and the mass of an equivalent volume of water, which is the very definition of specific gravity. The same calculation yields a quantity for specific gravity if weights instead of masses are used, since weight is nothing more than mass multiplied by the acceleration of gravity (\(F_{weight} = mg\)), and the constant \(g\) cancels out of both numerator and denominator:

\[\hbox{Specific Gravity} = {m_{dry} \over {m_{dry} - m_{wet}}} = {m_{dry}g \over {m_{dry}g - m_{wet}g}} = {\hbox{Dry weight} \over \hbox{Dry weight} - \hbox{Wet weight}}\]

Another application of Archimedes' Principle is the use of a hydrometer for measuring liquid density. If a narrow cylinder of precisely known volume and weight (most of the weight concentrated at one end) is immersed in liquid, that cylinder will sink to a level dependent on the liquid's density. In other words, it will sink to a level sufficient to displace its own weight in fluid. Calibrated marks made along the cylinder's length may then serve to register liquid density in any unit desired. A simple style of hydrometer used to measure the density of lead-acid battery electrolyte is shown in this illustration: To use this hydrometer, you must squeeze the rubber bulb at the top and dip the open end of the tube into the liquid to be sampled. Relaxing the rubber bulb will draw a sample of liquid up into the tube where it immerses the float. When enough liquid has been drawn into the tube to suspend the float so that it neither rests on the bottom of the tapered glass tube nor "tops out" near the bulb, the liquid's density may be read at the air/liquid interface. A denser electrolyte liquid results in the float rising to a higher level inside the hydrometer tube: Like all floating objects, the hydrometer float naturally seeks a condition of neutral buoyancy where the weight of the displaced liquid exactly equals the dry weight of the float. If the liquid happens to be very dense, the float will not have to sink very far in order to achieve neutral buoyancy; the less dense the liquid, the deeper the float must sink in order to achieve neutral buoyancy. This means the float's graduated density scale will read less density toward the top and greater density toward the bottom.

The following photograph shows a set of antique hydrometers used to measure the density of beer. The middle hydrometer bears a label showing its calibration to be in degrees Baumé (heavy): Liquid density measurement is useful in the alcoholic beverage industry to infer alcohol content. Since alcohol is less dense than water, a sample containing a greater concentration of alcohol (a greater proof rating) will be less dense than a "weaker" sample, all other factors being equal.

A less sophisticated version of hydrometer uses multiple balls of differing density. A common application for such a hydrometer is measuring the concentration of "antifreeze" coolant for automobile engines, comprised of a mixture of ethylene glycol and water. Ethylene glycol is a denser compound than water, and so a "stronger" mixture of antifreeze will have a greater bulk density than a "weaker" mixture. This style of hydrometer yields a crude measurement of ethylene glycol concentration based on the number of balls that float: A greater number of floating balls represents a "stronger" concentration of glycol in the coolant. "Weak" glycol concentrations represent a greater percentage of water in the coolant, with a correspondingly higher freezing temperature. Similar hydrometers are used to measure the concentration of sulfuric acid in lead-acid battery electrolyte, comprised of acid and water. The more fully charged a lead-acid battery is, the higher the concentration of sulfuric acid in the electrolyte fluid. The more discharged a lead-acid battery becomes, the less sulfuric acid (and the more water) is present in the electrolyte. Since sulfuric acid is a denser compound than water, measuring electrolyte density with a hydrometer yields a crude measurement of battery charge state.
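The dry-weight/wet-weight calculation above and the "degree" density scales introduced earlier combine naturally into a short routine. The sketch below is illustrative only: the 120 g / 105 g sample weights are invented, the 1.265 electrolyte specific gravity is merely a typical fully-charged value rather than a specification, and the Baumé and API formulas are the ones quoted earlier in this section.

```python
def specific_gravity_from_weights(dry, wet):
    """Archimedes' method: SG = dry / (dry - wet), dry and wet in the same units."""
    return dry / (dry - wet)

def degrees_baume_heavy(sg):
    """Heavy Baume scale (U.S. constant of 145), for liquids denser than water."""
    return 145.0 - 145.0 / sg

def degrees_api(sg):
    """Degrees API, used in the petroleum industries."""
    return 141.5 / sg - 131.5

# Hypothetical solid sample weighed in air and again fully submerged in water:
sg_sample = specific_gravity_from_weights(dry=120.0, wet=105.0)
print(f"Sample specific gravity: {sg_sample:.2f}")            # 8.00

# Typical fully-charged battery electrolyte, SG about 1.265:
print(f"Electrolyte: {degrees_baume_heavy(1.265):.1f} deg Baume (heavy)")

# A light petroleum liquid, SG about 0.85:
print(f"Oil: {degrees_api(0.85):.1f} deg API")
```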
Gas Laws

The Ideal Gas Law relates pressure, volume, molecular quantity, and temperature of an ideal gas together in one concise mathematical expression:

\[PV = nRT\]

\(P\) = Absolute pressure (atmospheres)
\(V\) = Volume (liters)
\(n\) = Gas quantity (moles)
\(R\) = Universal gas constant (0.0821 L \(\cdot\) atm / mol \(\cdot\) K)
\(T\) = Absolute temperature (K)

For example, the Ideal Gas Law predicts five moles of helium gas (20 grams worth) at a pressure of 1.4 atmospheres and a temperature of 310 Kelvin will occupy 90.9 liters of volume. An alternative form of the Ideal Gas Law uses the number of actual gas molecules (\(N\)) instead of the number of moles of molecules (\(n\)):

\[PV = NkT\]

\(P\) = Absolute pressure (Pascals)
\(V\) = Volume (cubic meters)
\(N\) = Gas quantity (molecules)
\(k\) = Boltzmann's constant (1.38 \(\times\) 10\(^{-23}\) J / K)

Interestingly, the Ideal Gas Law holds true for any gas. The theory behind this assumption is that gases are mostly empty space: there is far more volume of empty space separating individual gas molecules in a sample than there is space occupied by the gas molecules themselves. This means variations in the sizes of individual gas molecules within any sample are negligible, and therefore the type of gas molecules contained within the sample is irrelevant. Thus, we may apply either form of the Ideal Gas Law to situations regardless of the type of gas involved. This is also why the Ideal Gas Law does not apply to liquids or to phase changes (e.g. liquids boiling into gas): only in the gaseous phase will you find individual molecules separated by relatively large distances.

To modify the previous example, where 5 moles of helium gas occupied 90.9 liters at 1.4 atmospheres and 310 Kelvin, it is also true that 5 moles of nitrogen gas will occupy the same volume (90.9 liters) at 1.4 atmospheres and 310 Kelvin. The only difference will be the mass of each gas sample. 5 moles of helium gas (\(^{4}\)He) will have a mass of 20 grams, whereas 5 moles of nitrogen gas (\(^{14}\)N\(_{2}\)) will have a mass of 140 grams. Although no gas in real life is ideal, the Ideal Gas Law is a close approximation for conditions of modest gas density, and no phase changes (gas turning into liquid or vice-versa). You will find this Law appearing again and again in calculations of gas volume and gas flow rates, where engineers and technicians must know the relationship between gas volume, pressure, and temperature.
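Here is a brief Python check of the helium example above. Treat it as a sketch: it hard-codes the rounded gas constant quoted in the text (0.0821 L·atm/mol·K) and approximate molar masses for helium and nitrogen.

```python
R = 0.0821  # universal gas constant, L*atm / (mol*K), as quoted above

def ideal_gas_volume(n_moles, pressure_atm, temperature_K):
    """Solve PV = nRT for V (liters)."""
    return n_moles * R * temperature_K / pressure_atm

V = ideal_gas_volume(n_moles=5, pressure_atm=1.4, temperature_K=310)
print(f"Volume: {V:.1f} L")                 # ~90.9 liters, matching the text

# The volume is the same for any ideal gas; only the mass differs:
print(f"Helium mass:   {5 * 4.0:.0f} g")    # ~20 g  (He, ~4 g/mol)
print(f"Nitrogen mass: {5 * 28.0:.0f} g")   # ~140 g (N2, ~28 g/mol)
```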
Since the molecular quantity of an enclosed gas is constant, and the universal gas constant must be constant, the Ideal Gas Law may be written as a proportionality instead of an equation: \[PV \propto T\] Several "gas laws" are derived from this proportionality. They are as follows: \[PV = \hbox{Constant \hskip 20pt Boyle's Law (assuming constant temperature } T \hbox{)}\] \[V \propto T \hbox{\hskip 20pt Charles's Law (assuming constant pressure } P \hbox{)}\] \[P \propto T \hbox{\hskip 20pt Gay-Lussac's Law (assuming constant volume } V \hbox{)}\] You will see these laws referenced in explanations where the specified quantity is constant (or very nearly constant). For non-ideal conditions, the "Real" Gas Law formula incorporates a corrected term for the compressibility of the gas: \[PV = ZnRT\] \(Z\) = Gas compressibility factor (unitless) The compressibility factor for an ideal gas is unity (\(Z\) = 1), making the Ideal Gas Law a limiting case of the Real Gas Law. Real gases have compressibility factors less than unity (\(< 1\)). What this means is real gases tend to compress more than the Ideal Gas Law would predict (i.e. occupies less volume for a given amount of pressure than predicted, and/or exerts less pressure for a given volume than predicted). Fluid viscosity Viscosity is a measure of a fluid's resistance to shear. It may be visualized as a sort of internal friction, where individual fluid molecules experience either cohesion or collision while flowing past one another. The more "viscous" a fluid is, the "thicker" it is when stirred. Clean water is an example of a low-viscosity liquid, while liquid honey at room temperature is an example of a high-viscosity liquid. There are two different ways to quantify the viscosity of a fluid: absolute viscosity and kinematic viscosity. Absolute viscosity (symbolized by the Greek symbol "eta" \(\eta\), or sometimes by the Greek symbol "mu" \(\mu\)), also known as dynamic viscosity, is a direct relation between stress placed on a fluid and its rate of deformation (or shear). The textbook definition of absolute viscosity is based on a model of two flat plates moving past each other with a film of fluid separating them. The relationship between the shear stress applied to this fluid film (force divided by area) and the velocity/film thickness ratio is viscosity: \[\eta = {FL \over Av}\] \(\eta\) = Absolute viscosity (pascal-seconds), also symbolized as \(\mu\) \(F\) = Force (newtons) \(L\) = Film thickness (meters) – typically much less than 1 meter for any realistic demonstration! \(A\) = Plate area (square meters) \(v\) = Relative velocity (meters per second) Another common unit of measurement for absolute viscosity is the poise, with 1 poise being equal to 0.1 pascal-seconds. Both units are too large for common use, and so absolute viscosity is often expressed in centipoise. Water has an absolute viscosity of very nearly 1.000 centipoise. Kinematic viscosity (symbolized by the Greek letter "nu" \(\nu\)) includes an assessment of the fluid's density in addition to all the above factors. It is calculated as the quotient of absolute viscosity and mass density: \[\nu = {\eta \over \rho}\] \(\nu\) = Kinematic viscosity (stokes) \(\eta\) = Absolute viscosity (poise) \(\rho\) = Mass density (grams per cubic centimeter) As with the unit of poise, the unit of stokes is too large for convenient use, so kinematic viscosities are often expressed in units of centistokes. Water has a kinematic viscosity of very nearly 1.000 centistokes. 
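A short sketch of the viscosity relationships above, assuming round numbers (water at roughly 1 centipoise and 1 g/cm\(^{3}\); the honey figures are only order-of-magnitude illustrations):

```python
def kinematic_cst(absolute_cp, density_g_cm3):
    """nu = eta / rho: centipoise divided by g/cm^3 gives centistokes."""
    return absolute_cp / density_g_cm3

def cp_to_pa_s(absolute_cp):
    """1 poise = 0.1 Pa*s, so 1 centipoise = 0.001 Pa*s."""
    return absolute_cp * 1.0e-3

print(kinematic_cst(1.0, 1.0))        # water: ~1 cSt
print(kinematic_cst(10000.0, 1.4))    # honey-like fluid (rough figures): ~7140 cSt
print(cp_to_pa_s(1.0))                # water: ~0.001 Pa*s
```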
The mechanism of viscosity in liquids is inter-molecular cohesion. Since this cohesive force is overcome with increasing temperature, most liquids tend to become "thinner" (less viscous) as they heat up. The mechanism of viscosity in gases, however, is inter-molecular collisions. Since these collisions increase in frequency and intensity with increasing temperature, gases tend to become "thicker" (more viscous) as they heat up.

As a ratio of stress to rate of strain (applied force to yielding velocity), viscosity is often constant for a given fluid at a given temperature. Interesting exceptions exist, though. Fluids whose viscosities change with applied stress, and/or over time with all other factors constant, are referred to as non-Newtonian fluids. A simple example of a non-Newtonian fluid is cornstarch mixed with water, which "solidifies" under increasing stress and then returns to a liquid state when the stress is removed.

Viscous flow is a condition where friction forces dominate the behavior of a moving fluid, typically in cases where viscosity (internal fluid friction) is great. Inviscid flow, by contrast, is a condition where friction within a moving fluid is negligible and the fluid moves freely. The Reynolds number of a fluid is a dimensionless quantity expressing the ratio between a moving fluid's momentum and its viscosity, and is a helpful gauge in predicting how a fluid stream will move. A couple of formulae for calculating the Reynolds number of a flow are shown here:

\[\hbox{Re} = {{D \overline{v} \rho} \over \mu}\]

Re = Reynolds number (unitless)
\(D\) = Diameter of pipe (meters)
\(\overline{v}\) = Average velocity of fluid (meters per second)
\(\rho\) = Mass density of fluid (kilograms per cubic meter)
\(\mu\) = Absolute viscosity of fluid (pascal-seconds)

\[\hbox{Re} = {{(3160) G_f Q} \over {D \mu}}\]

\(G_f\) = Specific gravity of liquid (unitless)
\(Q\) = Flow rate (gallons per minute)
\(D\) = Diameter of pipe (inches)
\(\mu\) = Absolute viscosity of fluid (centipoise)
3160 = Conversion factor for British units

The first formula, with all metric units, is the textbook "definition" for Reynolds number. If you take the time to dimensionally analyze this formula, you will find that all units do indeed cancel to leave the Reynolds number unitless:

\[\hbox{Re} = {{[\hbox{m}] \left[{\hbox{m} \over \hbox{s}}\right] \left[{\hbox{kg} \over \hbox{m}^3}\right] \over {[\hbox{Pa} \cdot \hbox{s}]}}}\]

Recalling that the definition of a "pascal" is one Newton of force per square meter:

\[\hbox{Re} = {\left[{\hbox{kg} \over {\hbox{m} \cdot \hbox{s}}}\right] \over \left[{\hbox{N} \cdot \hbox{s} \over \hbox{m}^2}\right]}\]

\[\hbox{Re} = {\left[{\hbox{kg} \over {\hbox{m} \cdot \hbox{s}}}\right] \cdot \left[{\hbox{m}^2 \over \hbox{N} \cdot \hbox{s}}\right]}\]

\[\hbox{Re} = \left[{{\hbox{kg} \cdot \hbox{m}} \over {\hbox{N} \cdot \hbox{s}^2}}\right]\]

Recalling that the definition of a "newton" is one kilogram times meters per second squared (from Newton's Second Law equation \(F = ma\)):

\[\hbox{Re} = \left[{{\hbox{kg} \cdot \hbox{m} \cdot \hbox{s}^2} \over {\hbox{kg} \cdot \hbox{m} \cdot \hbox{s}^2}}\right]\]

\[\hbox{Re} = \hbox{\textit{unitless}}\]

The second formula given for calculating Reynolds number includes a conversion constant of 3160, which bears the unwieldy unit of "inches-centipoise-minutes per gallon" in order that the units of all variables (flow in gallons per minute, pipe diameter in inches, and viscosity in centipoise) may cancel.
Note that specific gravity (\(G_f\)) is unitless and therefore does not appear in this dimensional analysis:

\[\hbox{Re} = { {{\left[{\hbox{in} \cdot \hbox{cp} \cdot \hbox{min}} \over \hbox{gal} \right]} \left[{\hbox{gal} \over \hbox{min}}\right]} \over {[\hbox{in} \cdot \hbox{cp}]}}\]

You will often find this formula, and the conversion constant of 3160, shown without units at all. Its sole purpose is to make the calculation of Reynolds number easy when working with British units customary in the United States.

The Reynolds number of a fluid stream may be used to qualitatively predict whether the flow regime will be laminar or turbulent. Low Reynolds number values predict laminar (viscous) flow, where fluid molecules move in straight "stream-line" paths, and fluid velocity near the center of the pipe is substantially greater than near the pipe walls: High Reynolds number values predict turbulent (inviscid) flow, where individual molecule motion is chaotic on a microscopic scale, and fluid velocities across the face of the flow profile are similar: It should be emphasized that this turbulence is microscopic in nature, and occurs even when the fluid flows through a piping system free of obstructions, rough surfaces, and/or sudden directional changes. At high Reynolds number values, turbulence simply happens. Other forms of turbulence, such as eddies and swirl, are possible at high Reynolds numbers, but are caused by disturbances in the flow stream such as pipe elbows, tees, control valves, thermowells, and other irregular surfaces. The "micro-turbulence" naturally occurring at high Reynolds numbers will actually randomize such macroscopic (large-scale) motions if the fluid subsequently passes through a long enough length of straight pipe.

Turbulent flow is actually the desired condition for many industrial processes. When different fluids must be mixed together, for example, laminar flow is a bad thing: only turbulent flow will guarantee thorough mixing. The same is true for convective heat exchange: in order for two fluids to effectively exchange heat energy within a heat exchanger, the flow must be turbulent so that molecules from all portions of the flow stream will come into contact with the exchanger walls. Many types of flowmeters require a condition called fully-developed turbulent flow, where the flow profile is relatively flat and the only turbulence is that existing on a microscopic scale. Large-scale disturbances in the flow profile such as eddies and swirl tend to negatively affect the measurement performance of many flowmeter designs. This is why such flowmeters usually require long lengths of "straight-run" piping both upstream and downstream: to give micro-turbulence the opportunity to randomize any large-scale motions and homogenize the velocity profile.

A generally accepted rule-of-thumb is that Reynolds number values less than 2000 will probably be laminar, while values in excess of 10000 will probably be turbulent. There is no definite threshold value for all fluids and piping configurations, though. To illustrate, I will share with you some examples of Reynolds number thresholds for laminar versus turbulent flows given by various technical sources:

One section of the Instrument Engineers' Handbook, Process Measurement and Analysis, Third Edition (pg. 105 – authors: R. Siev, J.B. Arant, B.G. Lipták) defines Re \(<\) 2000 as "laminar" flow, Re \(>\) 10000 as "fully developed turbulent" flow, and any Reynolds number values between 2000 and 10000 as "transitional" flow.
The ISA Industrial Measurement Series – Flow (pg. 11) defines "laminar" flow as Re \(<\) 2000, "turbulent" flow as Re \(>\) 4000, and any Reynolds values in between 2000 and 4000 as "transitional" flow.

One section of the Standard Handbook of Engineering Calculations (pg. 1-202) defines "laminar" flow as Re \(<\) 2100, and "turbulent" flow as Re \(>\) 3000. A later section of that same book (page 3-384) defines "laminar" flow as Re \(<\) 1200 and "turbulent" flow as Re \(>\) 2500.

Douglas Giancoli, in his physics textbook Physics (third edition, pg. 11), defines "laminar" flow as Re \(<\) 2000 and "turbulent" flow as Re \(>\) 2000.

Finally, a source on the Internet (http://flow.netfirms.com/reynolds/theory.htm) attempts to define the threshold separating laminar from turbulent flow to an unprecedented degree of precision: Re \(<\) 2320 is supposedly the defining point of "laminar" flow, while Re \(>\) 2320 supposedly marks the onset of "turbulent" flow.

Clearly, Reynolds number alone is insufficient for consistent prediction of laminar or turbulent flow, otherwise we would find far greater consistency in the reported Reynolds number values for each regime. Pipe roughness, swirl, and other factors influence flow regime, making Reynolds number an approximate indicator only. It should be noted that laminar flow may be sustained at Reynolds numbers significantly in excess of 10000 under very special circumstances. For example, in certain coiled capillary tubes, laminar flow may be sustained all the way up to Re = 15000, due to a phenomenon known as the Dean effect!
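Both Reynolds-number formulas above are easy to evaluate in code. The following sketch is illustrative only: the pipe size, flow rate, and fluid properties are invented, and the 2000/10000 thresholds are simply the rule-of-thumb figures quoted above, not hard limits.

```python
def reynolds_metric(diameter_m, velocity_m_s, density_kg_m3, viscosity_pa_s):
    """Re = D * v * rho / mu, all quantities in SI units."""
    return diameter_m * velocity_m_s * density_kg_m3 / viscosity_pa_s

def reynolds_us(specific_gravity, flow_gpm, diameter_in, viscosity_cp):
    """Re = 3160 * Gf * Q / (D * mu), with Q in GPM, D in inches, mu in centipoise."""
    return 3160.0 * specific_gravity * flow_gpm / (diameter_in * viscosity_cp)

def regime(re):
    """Rule-of-thumb classification only; real thresholds vary with piping and fluid."""
    if re < 2000:
        return "probably laminar"
    elif re > 10000:
        return "probably turbulent"
    return "transitional"

# Hypothetical example: water-like fluid in a 2-inch line at 60 GPM
re = reynolds_us(specific_gravity=1.0, flow_gpm=60, diameter_in=2.0, viscosity_cp=1.0)
print(f"Re = {re:,.0f} -> {regime(re)}")

# Same idea in SI units: 50 mm pipe, 1.5 m/s, water at about 1000 kg/m3 and 0.001 Pa*s
re_si = reynolds_metric(0.050, 1.5, 1000.0, 0.001)
print(f"Re = {re_si:,.0f} -> {regime(re_si)}")
```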
For this limiting case, \(\rho\) is constant and the continuity equation simplifies to the following form: \[A_1 \overline{v_1} = A_2 \overline{v_2}\] Examining this equation in light of dimensional analysis, we see that the product \(A \overline{v}\) is also an expression of flow rate: \[A \overline{v} = \left[\hbox{m}^2 \over 1 \right] \left[\hbox{m} \over \hbox{s} \right] = \left[\hbox{m}^3 \over \hbox{s} \right]\] Cubic meters per second is an expression of volumetric flow rate, often symbolized by the variable \(Q\): \[Q = A \overline{v}\] The practical implication of this principle is that fluid velocity is inversely proportional to the cross-sectional area of a pipe. That is, fluid slows down when the pipe's diameter expands, and vice-versa. We readily see this principle manifest in the natural world: rivers run slowest where they are deep and wide, and run fastest where they are shallow and narrow. More specifically, we may say that the average velocity of a fluid through a pipe varies inversely with the square of the diameter, since cross-sectional area is proportional to the square of the pipe diameter. For example, if fluid flows at a velocity of 2 feet per second through a 12-inch pipe, and that pipe extends to a narrower section only 6 inches (half the diameter of the wide section), the velocity at the narrower section will be four times as great (8 feet per second), since the area of that skinnier section is one-quarter the area of the wider section. For example, consider a pipe with an inside diameter of 8 inches (2/3 of a foot), passing a liquid flow of 5 cubic feet per minute. The average velocity (\(v\)) of this fluid may be calculated as follows: \[\overline{v} = {Q \over A}\] Solving for \(A\) in units of square feet: \[A = \pi r^2\] \[A = \pi \left({1 \over 3} \hbox{ ft}\right)^2 = {\pi \over 9} \hbox{ ft}^2\] Now, solving for average velocity \(\overline{v}\): \[\overline{v} = {Q \over A} = {{5 \hbox{ ft}^3 \over \hbox{min}} \over {{\pi \over 9} \hbox{ ft}^2}}\] \[\overline{v} = \left({5 \hbox{ ft}^3 \over \hbox{min}}\right) \left({9 \over {\pi \hbox{ ft}^2}}\right)\] \[\overline{v} = {45 \hbox{ ft} \over \pi \hbox{ min}} = 14.32 {\hbox{ft} \over \hbox{min}}\] Thus, the average fluid velocity inside an 8-inch pipe passing a volumetric flow rate of 5 cubic feet per minute is 14.32 feet per minute. Viscous flow The pressure dropped by a slow-moving, viscous fluid through a pipe is described by the Hagen-Poiseuille equation. This equation applies only for conditions of low Reynolds number; i.e. when viscous forces are the dominant restraint to fluid motion through the pipe, and turbulence is nonexistent: \[Q = k \left({{\Delta P D^4} \over {\mu L}}\right)\] \(k\) = Unit conversion factor = 7.86 \(\times 10^5\) \(\Delta P\) = Pressure drop (inches of water column) \(D\) = Pipe diameter (inches) \(\mu\) = Liquid viscosity (centipoise) – this is a temperature-dependent variable! \(L\) = Length of pipe section (inches) Bernoulli's equation Bernoulli's equation is an expression of the Law of Energy Conservation for an inviscid (frictionless) fluid stream, named after Daniel Bernoulli. It states that the sum total energy at any point in a passive fluid stream (i.e. no pumps or other energy-imparting machines in the flow path, nor any energy-dissipating elements) must be constant. 
Two versions of the equation are shown here: \[z_1 \rho g + {v_1^2 \rho \over 2} + P_1 = z_2 \rho g + {v_2^2 \rho \over 2} + P_2\] \[z_1 + {v_1^2 \over {2 g}} + {P_1 \over \gamma} = z_2 + {v_2^2 \over {2 g}} + {P_2 \over \gamma}\] \(z\) = Height of fluid (from a common reference point, usually ground level) \(\rho\) = Mass density of fluid \(\gamma\) = Weight density of fluid (\(\gamma = \rho g\)) \(g\) = Acceleration of gravity \(v\) = Velocity of fluid \(P\) = Pressure of fluid Each of the three terms in Bernoulli's equation is an expression of a different kind of energy, commonly referred to as head: \[z \rho g \hbox{\hskip 20pt Elevation head}\] \[{v^2 \rho \over 2} \hbox{\hskip 20pt Velocity head}\] \[P \hbox{\hskip 20pt Pressure head}\] Elevation and Pressure heads are potential forms of energy, while Velocity head is a kinetic form of energy. Note how the elevation and velocity head terms so closely resemble the formulae for potential and kinetic energy of solid objects: \[E_p = mgh \hbox{\hskip 20pt Potential energy formula}\] \[E_k = {1 \over 2}mv^2 \hbox{\hskip 20pt Kinetic energy formula}\] The only real differences between the solid-object and fluid formulae for energies is the use of mass density (\(\rho\)) for fluids instead of mass (\(m\)) for solids, and the arbitrary use of the variable \(z\) for height instead of \(h\). In essence, the elevation and velocity head terms within Bernoulli's equation come from the assumption of individual fluid molecules behaving as miniscule solid masses. It is very important to maintain consistent units of measurement when using Bernoulli's equation! Each of the three energy terms (elevation, velocity, and pressure) must possess the exact same units if they are to add appropriately. Here is an example of dimensional analysis applied to the first version of Bernoulli's equation (using British units): \[z \rho g + {v^2 \rho \over 2} + P\] \[[\hbox{ft}] \left[\hbox{slug} \over \hbox{ft}^3\right] \left[\hbox{ft} \over \hbox{s}^2 \right] + \left[\hbox{ft} \over \hbox{s} \right]^2 \left[\hbox{slug} \over \hbox{ft}^3\right] + \left[\hbox{lb} \over \hbox{ft}^2\right] = \left[\hbox{slug} \over \hbox{ft} \cdot \hbox{s}^2 \right]\] As you can see, both the first and second terms of the equation (elevation and velocity heads) bear the same unit of slugs per foot-second squared after all the "feet" are canceled. The third term (pressure head) does not appear as though its units agree with the other two terms, until you realize that the unit definition of a "pound" is a slug of mass multiplied by the acceleration of gravity in feet per second squared, following Newton's Second Law of motion (\(F = ma\)): \[[\hbox{lb}] = [\hbox{slug}] \left[\hbox{ft} \over \hbox{s}^2\right]\] Once we make this substitution into the pressure head term, the units are revealed to be the same as the other two terms, slugs per foot-second squared: \[\left[\hbox{lb} \over \hbox{ft}^2\right] = \left[\hbox{slug} \left[\hbox{ft} \over \hbox{s}^2\right] \over \hbox{ft}^2\right] = \left[\hbox{slug} \over \hbox{ft} \cdot \hbox{s}^2 \right]\] In order for our British units to be consistent here, we must use feet for elevation, slugs per cubic foot for mass density, feet per second squared for acceleration, feet per second for velocity, and pounds per square foot for pressure. 
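As a quick numerical illustration of this unit discipline, the sketch below evaluates the three head terms of the pressure-unit form of Bernoulli's equation in consistent British units (feet, slugs, seconds, pounds per square foot). The sample elevation, velocity, and pressure are arbitrary assumed values, not numbers from the text.

```python
# Evaluate z*rho*g + (v^2)*rho/2 + P with consistent British units.
# All three terms come out in lb/ft^2 (equivalently slug/(ft*s^2)).
# Sample inputs are assumptions for illustration only.

g = 32.2      # ft/s^2, acceleration of gravity
rho = 1.94    # slug/ft^3, mass density of water

z = 5.0                  # ft, assumed elevation
v = 8.0                  # ft/s, assumed velocity
P_psi = 30.0             # assumed gauge pressure in PSI
P_psf = P_psi * 144.0    # convert PSI -> lb/ft^2 so every term shares the same unit

elevation_head = z * rho * g        # lb/ft^2
velocity_head = v ** 2 * rho / 2    # lb/ft^2
pressure_head = P_psf               # lb/ft^2

total = elevation_head + velocity_head + pressure_head
print(f"{elevation_head:.1f} + {velocity_head:.1f} + {pressure_head:.1f} = {total:.1f} lb/ft^2")
```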
If one wished to use the more common pressure unit of PSI (pounds per square inch) with Bernoulli's equation instead of PSF (pounds per square foot), all the other units would have to change accordingly: elevation in inches, mass density in slugs per cubic inch, acceleration in inches per second squared, and velocity in inches per second. Just for fun, we can try dimensional analysis on the second version of Bernoulli's equation, this time using metric units: \[z + {v^2 \over {2 g}} + {P \over \gamma}\] \[[\hbox{m}] + \left[\left[\hbox{m} \over \hbox{s}\right]^2 \over \left[\hbox{m} \over \hbox{s}^2\right]\right] + \left[\left[\hbox{N} \over \hbox{m}^2 \right] \over \left[\hbox{N} \over \hbox{m}^3\right] \right] = [\hbox{m}]\] Here, we see that all three terms end up being cast in simple units of meters. That is, the fluid's elevation, velocity, and pressure heads are all expressed as simple elevations. In order for our metric units to be consistent here, we must use meters for elevation, meters per second for velocity, meters per second squared for acceleration, pascals (newtons per square meter) for pressure, and newtons per cubic meter for weight density. Applying Bernoulli's equation to real-life applications can be a bit daunting, as there are so many different units of measurement to contend with, and so many calculations which must be precise in order to arrive at a correct final answer. The following example serves to illustrate how Bernoulli's equation may be applied to the solution of pressure at a point in a water piping system, assuming no frictional losses anywhere in the system. (In this example, water flows from a 10-inch pipe, where a pressure gauge registers 46 PSI and the velocity is 11 feet per second, into a 6-inch pipe located 3 feet higher, where a second gauge reads the unknown pressure \(P_2\).) We know without a doubt that Bernoulli's equation will be what we need to evaluate in order to solve for the unknown pressure \(P_2\), but where do we begin? A good place to start is by writing the equation we know we will need, then identifying all known values and all unknown values. Here is a list of known values, given to us already:

$z_1$ = 0 ft (arbitrarily assigned as 0 height)
$z_2$ = 3 ft (if $z_1$ is 0 feet, then $z_2$ is 3 ft above it)
$v_1$ = 11 ft/s
$P_1$ = 46 PSI (need to convert into PSF so all units match)
$g$ = 32.2 ft/s$^{2}$

The conversion for \(P_1\) from units of PSI into units of PSF is quite simple: multiply 46 PSI by 144 to get 6624 PSF. Here is a list of values unknown to us at this time:

$\rho$ (needs to be in units of slugs/ft$^{3}$)
$v_2$ (needs to be in units of ft/s just like $v_1$)
$P_2$ (the quantity we are ultimately solving for)

Now all we must do is solve for \(\rho\) and \(v_2\), and we will be ready to use Bernoulli's equation to solve for \(P_2\). The importance of identifying all the known and unknown quantities before beginning any calculations cannot be overstated. Doing so allows us to develop a plan for solving the problem. Without a plan, one has no idea of where or how to proceed, which is a condition many students repeatedly find themselves in when solving physics-type problems. We know that \(\rho\) is an expression of mass density for the fluid, and we were told the fluid in this example is water. Water has a maximum density of 62.4 pounds per cubic foot, but this figure is not usable in our chosen form of Bernoulli's equation because it is weight density (\(\gamma\)) and not mass density (\(\rho\)). The relationship between weight density \(\gamma\) and mass density \(\rho\) is the exact same relationship between weight (\(F_W\)) and mass (\(m\)) in a gravitational field (\(g\)). 
Newton's Second Law equation relating force to mass and acceleration (\(F = ma\)) works well to relate weight to mass and gravitational acceleration: \[F = ma\] \[F_W = mg\] Dividing both sides of this equation by volumetric units (\(V\)) (e.g. cubic feet) gives us our relationship between \(\gamma\) and \(\rho\): \[{F_W \over V} = {m \over V} g\] \[\gamma = \rho g\] Water has a weight density of 62.4 pounds per cubic foot in Earth gravity (32.2 feet per second squared), so: \[\rho = {\gamma \over g}\] \[\rho = {62.4 \hbox{ lb/ft}^3 \over 32.2 \hbox{ ft/s}^2} = 1.94 \hbox{ slugs/ft}^3\] Now we may calculate the total value for the left-hand side of Bernoulli's equation, representing the sum total of potential and kinetic heads for the fluid within the 10-inch pipe: \[z_1 \rho g + {v_1^2 \rho \over 2} + P_1 = \hbox{Total head at 10-inch pipe}\]

Calculation at 10-inch pipe:
$z_1 \rho g$ = (0 ft)(1.94 slugs/ft$^{3}$)(32.2 ft/s$^{2}$) = 0 lb/ft$^{2}$
$v_1^2 \rho / 2$ = (11 ft/s)$^{2}$(1.94 slugs/ft$^{3}$)/2 = 117.4 lb/ft$^{2}$
$P_1$ = (46 lb/in$^{2}$)(144 in$^{2}$/1 ft$^{2}$) = 6624 lb/ft$^{2}$
Total = 0 lb/ft$^{2}$ + 117.4 lb/ft$^{2}$ + 6624 lb/ft$^{2}$ = 6741.4 lb/ft$^{2}$

Note the absolutely consistent use of units: all units of distance are feet, all units of mass are slugs, and all units of time are seconds. Failure to maintain consistency of units will result in (often severely) incorrect results! There is one more unknown quantity to solve for before we may calculate values at the 6-inch pipe, and that unknown quantity is \(v_2\). We know that the Continuity equation gives us a mathematical relationship between volumetric flow (\(Q\)), pipe area (\(A\)), and velocity (\(v\)): \[Q = A_1 v_1 = A_2 v_2\] Looking at this equation, the only variable we know the value of at this point is \(v_1\), and we need to find \(v_2\). However, if we could find the values of \(A_1\) and \(A_2\), and/or \(Q\), we would have the information we need to solve for \(v_2\), which in turn would give us the information we would need to solve for \(P_2\) in Bernoulli's equation. One way to approach this problem is to express the areas and velocities as ratios, eliminating \(Q\) entirely so all we need to find are \(A_1\) and \(A_2\): \[{A_1 \over A_2} = {v_2 \over v_1}\] The area of a circular pipe is given by the basic equation \(A = \pi r^2\). 
Since the problem gives us each pipe's diameter (10 inches and 6 inches), we know the radii (5 inches and 3 inches, respectively) which we may then plug into our ratio equation: \[{\pi (5\hbox{ in})^2 \over \pi (3\hbox{ in})^2} = {v_2 \over v_1}\] \[{25 \over 9} = {v_2 \over v_1}\] Knowing \(v_1\) has a value of 11 feet per second, the solution for \(v_2\) is now quite simple: \[v_2 = 11 \hbox{ ft/s} \left({25 \over 9}\right)\] \[v_2 = (11 \hbox{ ft/s}) (2.778) = 30.56 \hbox{ ft/s}\] Finally, we have all the pieces necessary to solve for \(P_2\) in the right-hand side of Bernoulli's equation: \[z_2 \rho g + {v_2^2 \rho \over 2} + P_2 = \hbox{Total head at 6-inch pipe}\]

Calculation at 6-inch pipe:
$z_2 \rho g$ = (3 ft)(1.94 slugs/ft$^{3}$)(32.2 ft/s$^{2}$) = 187.4 lb/ft$^{2}$
$v_2^2 \rho / 2$ = (30.56 ft/s)$^{2}$(1.94 slugs/ft$^{3}$)/2 = 905.6 lb/ft$^{2}$
$P_2$ = (unknown)
Total = 187.4 lb/ft$^{2}$ + 905.6 lb/ft$^{2}$ + $P_2$ = 1093 lb/ft$^{2}$ + $P_2$

Knowing that the total head calculated at the first location was 6741.4 lb/ft\(^{2}\), and the Conservation of Energy requires total heads at both locations be equal (assuming no energy lost to fluid friction along the way), \(P_2\) must be equal to: \[6741.4 \hbox{ lb/ft}^2 = 1093 \hbox{ lb/ft}^2 + P_2\] \[P_2 = 6741.4 \hbox{ lb/ft}^2 - 1093 \hbox{ lb/ft}^2 = 5648.3 \hbox{ lb/ft}^2\] Converting pounds per square foot into the more customary unit of pounds per square inch (PSI): \[P_2 = (5648.3 \hbox{ lb/ft}^2) \left({1 \hbox{ ft}^2 \over 144 \hbox{ in}^2}\right)\] \[P_2 = 39.2 \hbox{ lb/in}^2\] Before discussing the larger meaning of our solution, it would be good to review the problem-solving plan we followed to calculate \(P_2\): First, we identified Bernoulli's equation as being the central equation necessary for solving \(P_2\). Then, we identified all the known variables within Bernoulli's equation given to us in the problem, and also noted any unit-conversion operations necessary. Next, we identified any unknown variables necessary to solve for \(P_2\) in Bernoulli's equation. For each of those unknown variables, we found or developed equations to solve for them, based on variables known to us. The graphic shown above illustrates our plan of solution, with arrows showing the dependent relationships where equations supplied values for unknown quantities in other equations. This is not just a problem-solving technique unique to Bernoulli's equation; it is a general strategy applicable to any type of problem where multiple equations must be used to solve for some quantity. The study of physics in general is filled with problems like this! Note how our calculated value for \(P_2\) at the second gauge is so much lower than the pressure at the first gauge: 39.2 PSI compared to 46 PSI. This represents nearly a 7 PSI decrease in pressure! Note also how little vertical distance separates the two gauges: only 3 feet. Clearly, the change in elevation between those two points is insufficient to account for the large loss in pressure. Given a 3 foot difference in elevation, one would expect a pressure reduction of about 1.3 PSI for a static column of water, but what we're seeing in this piping system is a pressure drop of nearly 7 PSI. The difference is due to an exchange of energy from potential to kinetic form, as the fluid enters a much narrower pipe (6 inches instead of 10) and must increase velocity. 
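The entire worked example above can also be reproduced in a few lines of code, which is a convenient way to check the arithmetic. The sketch below follows the same plan: convert \(P_1\) to PSF, obtain \(\rho\) from the weight density of water, use continuity to get \(v_2\), then solve Bernoulli's equation for \(P_2\).

```python
# Reproduce the worked example above (10-inch pipe -> 6-inch pipe, water).
g = 32.2                      # ft/s^2
gamma = 62.4                  # lb/ft^3, weight density of water
rho = gamma / g               # ~1.94 slugs/ft^3

z1, z2 = 0.0, 3.0             # ft
v1 = 11.0                     # ft/s
P1 = 46.0 * 144.0             # 46 PSI -> 6624 lb/ft^2

# Continuity: A1*v1 = A2*v2, with D1 = 10 in and D2 = 6 in
D1, D2 = 10.0, 6.0
v2 = v1 * (D1 / D2) ** 2      # 11 * (25/9) = 30.56 ft/s

head1 = z1 * rho * g + v1 ** 2 * rho / 2 + P1       # total head, lb/ft^2
P2 = head1 - (z2 * rho * g + v2 ** 2 * rho / 2)     # solve Bernoulli for P2

print(f"v2 = {v2:.2f} ft/s")
print(f"P2 = {P2:.1f} lb/ft^2 = {P2 / 144.0:.1f} PSI")   # roughly 39 PSI, as found above
```

Small differences in the last decimal place compared with the hand calculation are only due to rounding of the intermediate values.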
Furthermore, if we were to increase the flow rate discharged from the pump, resulting in even more velocity through the narrow pipe, pressure at \(P_2\) might even drop lower than atmospheric. In other words, Bernoulli's equation tells us we can actually produce a vacuum by accelerating a fluid through a constriction. This principle is widely used in industry with devices known as eductors or ejectors: tapered tubes through which fluid flows at high velocity to produce a vacuum at the throat. This, in fact, is how a carburetor works in an older automobile engine to vaporize liquid gasoline fuel into a stream of air drawn into the engine: the engine's intake air passes through a venturi tube, where vacuum at the throat of the venturi produces enough negative pressure to draw liquid gasoline into the stream to produce a fine mist. Ejectors use a high-velocity gas or vapor (e.g. superheated steam) to produce significant vacuums. Eductors use process liquid flow, such as the eductor shown in this next photograph where wastewater flow creates a vacuum to draw gaseous chlorine into the stream for biological disinfection: Here, the eductor helps fulfill an important safety function. By creating a vacuum to draw toxic chlorine gas from the supply tank into the water stream, the chlorine gas piping may be continuously maintained at a slightly negative pressure throughout. If ever a leak were to develop in the chlorine system, this vacuum would cause ambient air to enter the chlorine pipe rather than toxic chlorine gas to exit the pipe, making a leak far less dangerous than if the chlorine gas piping were maintained in a pressurized state. Torricelli's equation The velocity of a liquid stream exiting from a nozzle, pressured solely by a vertical column of that same liquid, is equal to the free-fall velocity of a solid mass dropped from the same height as the top of the liquid column. In both cases, potential energy (in the form of vertical height) converts to kinetic energy (motion): This was discovered by Evangelista Torricelli almost 100 years prior to Bernoulli's more comprehensive formulation. The velocity may be determined by solving for \(v\) after setting the potential and kinetic energy formulae equal to each other (since all potential energy at the upper height must translate into kinetic energy at the bottom, assuming no frictional losses): \[mgh = {1 \over 2}mv^2\] \[gh = {1 \over 2}v^2\] \[2gh = v^2\] \[v = \sqrt{2gh}\] Note how mass (\(m\)) simply disappears from the equation, neatly canceling on both sides. This means the nozzle velocity depends only on height, not the mass density of the liquid. It also means the velocity of the falling object depends only on height, not the mass of the object. Flow through a venturi tube If an incompressible fluid moves through a venturi tube (i.e. a tube purposefully built to be narrow in the middle), the continuity principle tells us the fluid velocity must increase through the narrow portion. This increase in velocity causes kinetic energy to increase at that point. If the tube is level, there will be negligible difference in elevation (\(z\)) between different points of the tube's centerline, which means elevation head remains constant. According to the Law of Energy Conservation, some other form of energy must decrease to account for the increase in kinetic energy. 
This other form is the pressure head, which decreases at the throat of the venturi. Ideally, the pressure downstream of the narrow throat should be the same as the pressure upstream, assuming equal pipe diameters upstream and down. However, in practice the downstream pressure gauge will show slightly less pressure than the upstream gauge due to some inevitable energy loss as the fluid passes through the venturi. Some of this loss is due to fluid friction against the walls of the tube, and some is due to viscous losses within the fluid driven by turbulent fluid motion at the high-velocity throat passage. The difference between upstream and downstream pressure is called permanent pressure loss, while the difference in pressure between the narrow throat and downstream is called pressure recovery. If we install vertical sight-tubes called piezometers along a horizontal venturi tube, the differences in pressure will be shown by the heights of liquid columns within the tubes. Here, we assume an ideal (inviscid) liquid with no permanent pressure loss. The height of liquid in each piezometer tube represents the amount of potential energy in the fluid at that point along the venturi tube. We may gain more insight into the nature of energy in this moving fluid stream if we add three more piezometers, each one equipped with its own Pitot tube facing upstream to "catch" the velocity of the fluid. Rather than represent potential energy by liquid height as the straight-tube piezometers do, the Pitot tube piezometers represent the total energy (potential plus kinetic) of the fluid. As such, the liquid heights in these new piezometers are all equal to each other, showing that total energy is indeed conserved at every point in the system: \[z + {v^2 \over {2 g}} + {P \over \gamma} = \hbox{(constant)}\] Here, each of the "heads" represented in Bernoulli's equation is shown in relation to the different piezometer heights. The difference in liquid column height between each Pitot tube piezometer (potential + kinetic energy) and its corresponding straight-tube piezometer (potential energy alone) reflects the amount of kinetic energy possessed by the fluid stream at that point in the venturi tube. In a real venturi tube, there is some energy permanently lost in the moving fluid due to friction. Consequently, the piezometer measurements in a real venturi tube would look something like this: the "energy line" is seen to slope downhill from inlet to outlet on the venturi tube, showing a degradation in total energy content from beginning to end.
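For readers who want to experiment with these ideas, the short sketch below applies the same continuity-plus-Bernoulli reasoning to a level venturi tube, computing the pressure at the throat for an ideal (frictionless) liquid, and also evaluates Torricelli's nozzle-velocity formula. All pipe dimensions, pressures, and heights are assumed example values, not numbers from the text.

```python
import math

# Ideal (inviscid) venturi: level tube, so elevation head is constant and
# P1 + v1^2*rho/2 = P2 + v2^2*rho/2, with A1*v1 = A2*v2 from continuity.
# All numbers below are assumed example values (SI units).

rho = 1000.0                     # kg/m^3, water
D_inlet, D_throat = 0.10, 0.05   # m, pipe and throat diameters (assumed)
P_inlet = 200e3                  # Pa, assumed inlet pressure
v_inlet = 2.0                    # m/s, assumed inlet velocity

A_inlet = math.pi * (D_inlet / 2) ** 2
A_throat = math.pi * (D_throat / 2) ** 2
v_throat = v_inlet * A_inlet / A_throat            # continuity
P_throat = P_inlet + rho * (v_inlet ** 2 - v_throat ** 2) / 2   # Bernoulli, level tube
print(f"v_throat = {v_throat:.1f} m/s, P_throat = {P_throat / 1e3:.1f} kPa")

# Torricelli: nozzle velocity from a liquid column of height h, v = sqrt(2*g*h)
g = 9.81
h = 1.5                          # m, assumed height of the liquid column
print(f"Torricelli nozzle velocity for h = {h} m: {math.sqrt(2 * g * h):.2f} m/s")
```

Raising the assumed inlet velocity (or shrinking the throat) drives the computed throat pressure down toward and eventually below atmospheric, which is exactly the eductor/ejector behavior described above.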
CommonCrawl
Body shape index versus body mass index as correlates of health risk in young healthy sedentary men Marzena Malara1, Anna Kęska1, Joanna Tkaczyk1 & Grażyna Lutosławska1 Recently a new simply calculated index of body composition -a body shape index (ABSI) has been introduced as an index more reliable than BMI of association between body composition and all-cause mortality. However, until now associations between ABSI and metabolic risk factors have not been evaluated. A total of 114 male university students not engaged in any planned physical activity participated in the present study. Anthropometric measurements (weight, height, waist circumference) were recorded. Body mass index (BMI) was calculated from weight and height, body shape index (ABSI) was calculated from waist circumference, weight, height and BMI. Blood was withdrawn after an overnight fast from the antecubital vein. Triacylglycerols, total cholesterol and HDL-cholesterol levels in plasma were determined using colorimetric methods and Randox commercial kits. Plasma LDL-cholesterol concentrations were calculated according to the Friedewald formula. Circulating insulin was assayed using a standard radioimmunological method with monoclonal antibodies against insulin and BioSource commercial kits. BMI was slightly, but significantly correlated only with circulating TG (r=0.330, p < 0.001) In contrast, ABSI was slightly, but significantly correlated with plasma levels of insulin (r=0.360, p<0.001), TC (r=0.270, p<0.002), LDL-C and non-HDL-C (r=0.300, p<0.001). In participants at the upper quartile of BMI circulating TG was higher (by 50%, p<0.05) than in their counterparts at the lower BMI quartile. Subjects representing the upper quartile of ABSI were characterized by higher plasma levels of insulin, TC, LDL-C and non-HDL in comparison with subjects at the lower ABSI quartile. (by 92 %, 11. %, 29 % and 21 % respectively, p<0.001). ABSI, a new simply calculated index of body fat seems to more accurately depict the variability in circulating insulin and lipoproteins than BMI at least in young, healthy male subjects. At present obesity is recognized as the main cause of type 2 diabetes, cardiovascular disease and an important contributing factor in some cancers [1]. In consequence, precise obesity criteria and diagnosis are of special importance in medical practice. There is a wide range of methods for body fat determination, which are suitable in laboratory practice (BIA, DEXA, CT, and MRI); however, they require costly equipment, which is not always available [2-4]. Much simpler skinfold measurement are time-consuming and have to be performed by experienced technicians [5]. Thus, they are not suitable either for everyday medical practice or in population-based studies. According to WHO recommendations, the body mass index (BMI) calculated from body weight and height and waist circumference (WC) are a valid indicators of fatness and this assumption has been supported by many studies concerning their associations with health risk [6-8]. On the other hand, there are data questioning BMI reliability and indicating that it provides a false diagnosis of body fatness [9,10]. There are also data indicating that regional fat distribution, but not total body fat stores, are related to metabolic disturbances and health risks [11,12]. Furthermore, it has been demonstrated that WHO standards of BMI are not suitable for the evaluation of body fat with respect to ethnicity [13]. 
Similarly, many doubts exist with respect to associations between BMI and mortality. Assuming that BMI provides reliable information concerning body fatness and taking into account detrimental effects of fat excess on health and mortality, it is not clear why the BMI-mortality relationship is U-shaped, suggesting high mortality in both lean and obese humans [14-16]. Moreover, it is worth noting that in young healthy adults BMI, but also other surrogate indices of fatness (e.g. waist-to-height ratio, body adiposity index) provide poor prognosis of fat mass since they reflect mostly skeletal muscle mass [17,18]. In the literature there are many other simple surrogate indices of body fat such as the waist-to-height ratio (e.g. weight-to-height ratio-WtHR), conicity index - CI, body adiposity index -BAI), however, their validity in respect to BMI is still under debate [19-22]. Recently Krakauer and Krakauer [23] proposed a new simply calculated index of body composition (a body shape index - ABSI) as more reliable than BMI in determination of association between all-cause mortality and body composition. However, data concerning relationship of ABSI with health risk are controversial. ABSI was found to predict resting blood pressure in adolescents more precisely than BMI [24]. On the contrary, ABSI predictive ability was not better than BMI with respect to type 2 diabetes, hypertension and cardiovascular disease in Chinese and Iranian populations [25-27]. The reason for this discrepancy is unknown, however, it may be due to ethnic differences in body fat distribution [28]. In addition, it cannot be excluded that BMI reflecting mostly total body fat differs in its relationship to metabolic variables from ABSI, which encompasses waist circumference, thus at least partially depicts fat distribution [29]. Thus, this study was undertaken and aimed at the evaluation of the relationship between ABSI and BMI and biochemical variables contributing to health risk in sedentary young male adults. We studied a sample of 114 male university students recruited through word-of-mouth, and posters displayed at the university and in student dormitories. They were selected from 148 volunteers because they agreed to venous blood withdrawal under fasting conditions. All participants were healthy non-smokers not engaged in planned physical activity and not taking any medication on a regular basis. They were informed about procedures and all provided their written consent. The study protocol was accepted by the local ethics committee at the Jósef Piłsudski University of Physical Education. Anthropometric measurements Body mass was measured to the nearest 0.1 kg and body height to the nearest 0.5 cm using standard medical equipment in subjects wearing light indoor clothing without shoes, jackets and sweaters. Body mass index (BMI) was calculated as body mass (kg) divided by height (m) squared. The subjects' adiposity was classified according to WHO standards: underweight was defined as BMI < 18.5, normal weight as BMI ≥ 18.5 and <25, overweight as BMI ≥ 25 to BMI <30, and obesity as a BMI ≥30 [30]. Waist circumference (WC) was measured in the midway section between the lower edge of the ribs and the iliac crest with an accuracy of 0.1 cm using non-stretchable tape and values <102 cm were accepted as normal [30]. All measurements were performed twice but in case of divergent results were repeated for the third time. 
A Body Shape Index (ABSI) was calculated according to Krakauer and Krakauer [23] and the following formula: $$ \mathrm{ABSI} = \mathrm{W}\mathrm{C}\left(\mathrm{m}\right)/\left[{\mathrm{BMI}}^{2/3} \times \mathrm{height}\ {\left(\mathrm{m}\right)}^{1/2}\right] $$ Participants were asked to refrain from physical activity for 48 h before blood sampling. They were tested in the morning (8:00–8:30 a.m.) after an overnight fast. Venous blood was collected under aseptic conditions into plastic tubes containing anticoagulant and centrifuged at 4°C. Plasma was stored at -70° until analysis. Glucose was determined using the GOD-PAP method. Circulating glucose was classified according to the International Diabetes Federation with 5.5 mmol/l accepted as the upper level [31]. Triacylglycerols (TG), total cholesterol (TC), and HDL-cholesterol (HDL-C) were assayed colorimetrically. All variables were determined using commercial kits (Randox Laboratories, Great Britain). Coefficients of variation for these analyses did not exceed 5%. The plasma level of LDL-cholesterol (LDL-C) was calculated according to the Friedewald equation [32]. Non-HDL-cholesterol (non-HDL-C) was calculated by subtraction of HDL-C from TC [33]. Concentrations of plasma lipoproteins were classified according to recommendations of the European Atherosclerosis Society and European Guidelines on Cardiovascular Disease Prevention in Clinical Practice (TG <1.7 mmol/l, TC < 4.5 mmol/l, HLD-C >1.0 mmol/l, LDL-C < 2.5 mmol/l, non-HDL < 2.5 mmol/l [34,35]. Plasma insulin was assayed using a standard radioimmunoassay with monoclonal antibodies against insulin and commercial kits (BioSource, Belgium). The sensitivity of the method was 1 μIU/ml, intra and inter-assay coefficients of variation were 6.8% and 9.3%, respectively. All analyses were run in duplicate. Data are presented as mean ± SD. All variables were checked for normality using the Shapiro-Wilk test. The Pearson correlation coefficients were calculated for logarithmically (e-based) transformed data. Moreover, biochemical variables were interpreted with respect to lower and upper quartile of BMI and ABSI and the Mann–Whitney test was used for data comparison. A p value ≤ 0.05 was considered to be statistically significant. All calculations were carried out using the Statistica v.7 (Statsoft, Illinois, USA). Baseline characteristics of the participants are presented in Table 1. According to BMI standards 74.6% of participants were normal weight, 18.4% were overweight, and 7.0% were obese. Only 2.0% of our subjects were characterized by higher than recommended waist circumference. Circulating glucose and triacylglycerols were higher than normal in similar percentages of students (6.1%). Slightly more participants (8.8%) were characterized by lower than normal HDL-cholesterol. On the contrary, higher than normal TC, LDL-C and non-HDL-C were observed in 24.5%, 25.4% and 29.8% of participants, respectively. Table 1 Anthropometric characteristics and biochemical variables in young healthy men (means ± SD) BMI was slightly, but significantly correlated only with circulating TG (r = 0.330, p < 0.001). On the contrary, ABSI was slightly, but significantly correlated with plasma levels of insulin (r = 0.360, p < 0.001), TC (r = 0.270, p < 0.002), LDL-C and non-HDL-C (r = 0.300, p < 0.001) (data not shown). In participants at the upper BMI quartile circulating TG was higher (by 50%, p < 0.05) than in their counterparts at the lower BMI quartile (Table 2). 
Subjects representing the upper quartile of ABSI were characterized by higher plasma levels of insulin, TC, LDL-C and non-HDL (by 92%, 11. %, 29% and 21%, respectively, p < 0.001) in comparison with subjects at the lower ABSI quartile. Table 2 Biochemical variables in young healthy men according to lower and upper quartiles of BMI and ABSI The most important finding of our study concerns ABSI, which is better correlated to changes in circulating TC and insulin than BMI in young sedentary men. It is worth noting that the homogeneity of our participants according to age and sex strengthens our findings. Additionally, taking into account that disturbances in biochemical parameters (insulin, glucose, and lipoproteins) bring about health deteriorations, it could be tentatively postulated that ABSI may be of importance in risk prognosis of type 2 diabetes and/or atherogenesis [36]. On the other hand, it should be stressed that ABSI validity in the prognosis of cardiovascular disease (CVD) is far from being elucidated since Maessen et al. [37] have not found ABSI capable of determining the presence of this disease in middle-aged subjects. It should be stressed that in some way ABSI agrees with the WHO recommendation concerning waist circumference inclusion into health risk evaluation [30]. Similarly, other authors have suggested that both BMI and WC contribute to the prediction of body adiposity in white men and women [38]. The importance of WC measurements in diagnosis of health risk has been suggested by many authors since it has been postulated that WC provides indirect information about visceral fat accumulation [39,40]. At present it is well documented that visceral fat due to its location and metabolic characteristics contributes to distorted metabolism to a much greater extent than subcutaneous fat [41,42]. However, it is worth noting that mathematical correlations between metabolic variables and surrogate indices of fatness found in our study do not mean a direct cause-effect relationship. On the other hand, marked differences in metabolic profiles of subjects selected according to lower and upper quartiles of ABSI may suggest that ABSI, but not BMI, depicts variability in circulating insulin and lipoproteins in participants of our study mostly characterized by normal body fat according to BMI standards. Thus, it seems feasible that ABSI allows diagnosis of slight metabolic disturbances observed in otherwise healthy subjects [43,44]. In addition, assuming that lower and upper quartiles of BMI varied by 41% (28.2 versus 20.0) and ABSI quartiles differ by 11.6% (0.077 versus 0.069) it seems that even minor changes in ABSI provide information about variability in metabolic risk. However, more studies are needed to prove this hypothesis, because of limitations of our study: the low number of participants from one social group, living in a big city, and representing one sex and one ethnicity. Our study evaluated relationships between two surrogate measures of body composition – body mass index (BMI) and a body shape index (ABSI) with blood biochemical variables which contribute to health risk (glucose and lipoproteins). Participants classified according to lower and upper quartile of ABSI markedly differ with respect to circulating insulin, total cholesterol, LDL-cholesterol and non-HDL-cholesterol. On the contrary, participants classified according to lower and upper quartile of BMI slightly differ exclusively with respect to circulating triacylglycerols. 
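For readers who want to reproduce the two indices being compared, a minimal sketch of the BMI and ABSI calculations described in the Methods above is given below. The height, weight, and waist values are invented example inputs, not data from this study.

```python
# Minimal sketch of the anthropometric indices used in this study.
# Input values are invented examples, not participant data.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def absi(waist_m: float, weight_kg: float, height_m: float) -> float:
    """A Body Shape Index (Krakauer & Krakauer):
    ABSI = WC / (BMI^(2/3) * height^(1/2)), with WC and height in metres."""
    return waist_m / (bmi(weight_kg, height_m) ** (2 / 3) * height_m ** 0.5)

example_bmi = bmi(weight_kg=78.0, height_m=1.80)
example_absi = absi(waist_m=0.84, weight_kg=78.0, height_m=1.80)
print(f"BMI  = {example_bmi:.1f} kg/m^2")
print(f"ABSI = {example_absi:.4f}")
```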
Thus, in young and otherwise healthy sedentary men ABSI is a better predictor than BMI of variability in biochemical parameters, which may indicate disturbed metabolic processes. Camici M, Galetta F, Capri A: Obesity and increased risk for atherosclerosis and cancer. Int Med 2014, http://dx.doi.org/10.4172/2165-8048.1000154 Sun G, French CR, Martin GR, Younghusband B, Green RC, Xie Y, et al. Comparison of multifrequency bioelectrical impedance analysis with dual-X-ray absorptiometry for assessment of percentage body fat in a large, healthy population. Am J Clin Nutr. 2005;81:74–8. Camhi SM, Bray GA, Bouchard C, Greenway FI, Johnson WD, Newton RL, et al. The relationship of waist circumference and BMI to visceral, subcutaneous, and total body fat: sex and race differences. Obesity. 2011;19:402–8. Neamat-Allah J, Wald D, Hüsing A, Teucher B, Wendt A, Delorme S, et al. Validation of anthropometric indices of adiposity against whole-body magnetic resonance imaging – a study within the German European Prospective Investigation into Cancer and Nutrition (EPIC) Cohorts. PLoS ONE. 2014;9:e91586. Anthropometry procedure manual. www.cdc.gov/nchs/data/nhanes/nhanes_07_08/mnual_an.pfd Feller S, Boeing H, Pischon T. Body mass index, waist circumference, and the risk of type 2 diabetes mellitus. Dtsch Arztebl Int. 2010;107:470–6. Flint AJ, Rexrode KM, Hu FB, Glynn RJ, Caspard H, Manson JE, et al. Body mass index, waist circumference, and risk of coronary heart disease: a prospective study among men and women. Obes Res Clin Pract. 2010;4:e171–81. Feng R-N, Zhao C, Wang C, Niu Y-C, Li K, Guo F-C, et al. BMI is strongly associated with hypertension and waist circumference is strongly associated with type 2 diabetes and dyslipidemia, in northern Chinese adults. J Epidemiol. 2012;22:317–23. Heo M, Faith MS, Pietrobelli A, Heymsfield SB. Percentage of body fat cutoffs by sex, age, and race-ethnicity in the US adult population from NHAHES 1999–2004. Am J Clin Nutr. 2012;95:594–602. Shah NR, Braverman ER. Measuring adiposity in patients: the utility of body mass index (BMI), percent body fat, and leptin. PLOS ONE. 2012;7:e33308. doi:101371/journal.pone.0033308. Youn C-H, Bezerra HG, Wu T-H, Yang F-S, Liu C-C, Wu Y-J, et al. The normal limits, subclinical significance, related metabolic derangements and distinct biological effects of body site-specific adiposity in relatively healthy population. PLOS ONE. 2013;8:e61997. doi:101371/journal.pone.0061997. De Larochelliěre E, Côté J, Gilbert G, Bibeau K, Ross MK, Dion-Roy V, et al. Visceral/epicardial adiposity in non-obese and apparently healthy young adults: association with the cardiometabolic profile. Athrosclerosis. 2014;234:23–9. Nazare J-A, Smith JD, Borel A-L, Haffner SM, Baělkau B, Ross R, et al. Ethnic influences on the relations between abdominal subcutaneous and visceral adiposity, liver ft, and cardiometabolic risk profile: the international study of prediction of intra- abdominal adiposity and its relationship with cardiometabolic risk/intra-abdominal adiposity. Am J Clin Nutr. 2012;96:714–26. Odegaard AO, Pereira MA, Koh W-P, Gross MD, Duval S, Yu MC, et al. BMI, all-cause and cause-specific mortality in Chinese Singaporean men and women: the Singapore Chinese health study. PLOS ONE. 2010;5:e14000. doi:101371/journl.pone.0014000. Kokinos P, Myers J, Faselis C, Doumas M, Kheirbek R, Nylen E. BMI-mortality paradox and fitness in African American and Caucasian men with type 2 diabetes. Diabetes Care. 2012;35:1021–7. 
Chen Y, Copeland WK, Vendanthn R, Grant E, Lee JE, Gu D, et al. Association between body mass index and cardiovascular disease mortality in east Asians and south Asians: pooled analysis of prospective data from the Asia Cohort Consortium. BMJ. 2013;347:f5446. doi:10.1136/bmj.f5446. Zaccagni L, Barbieri D, Gualdi-Russo E. Body composition and physical activity in Italian university students. J Transl Med. 2014;12:120. http://www.translational-medicine.com/content/12/1/120. Heymsfield SB, Scherzer R, Piertobelli A, Lewis CE, Grunfeld C. Body mass index as a phenotyping expression of adiposity: quantitative contribution of muscularity in a population based sample. Int J Obes. 2009;33:1363–73. Ashwell M, Gunn P, Gibson S. Waist-to-height ratio is a better screening tool than waist circumference and BMI for adult cardiometabolic risk factors: systemic review and meta-analysis. Obes Rev. 2012;13:275–86. Taylor RW, Jones IE, Williams SM, Goulding A. Evaluation of waist circumference, waist-to-hip ratio and the conicity index as a screening tools for high trunk fat mass, as measured by dual-energy X-ray absorptiometry, in children aged 3–19 y. Am J Clin Nutr. 2000;72:490–5. Heymsfield SB, Heo M, Pietrobelli A. Are adult body circumferences associated with height? Relevance to normative ranges and circumferential indexes. Am J Clin Nutr. 2011;93:302–7. Lichtash CT, Ciu J, Guo X, Chen Y-DI, Hsueh WA, Rotter JI, et al. Body adiposity index versus body mass index and other anthropometric treats s correlates of cardiometabolic risk factors. PLOS ONE. 2013;8:e65954. doi:10.137/journal.pone.0065954. Krakauer NY, Krakauer JC. A new body shape index predicts mortality hazards independently of body mass index. PLoS ONE. 2012;7:e39504. doi:10.1371/journal.pone 0039504. Duncan MJ, Mota J, Vale S, Santos MP, Ribeiro JC. Associations between body mass index, waist circumference and body shape index with resting blood pressure in Portuguese adolescents. Am J Hum Biol. 2013;40:163–7. He S, Chen X. Could the new body shape index predict the new onset of diabetes mellitus in the Chinese population? PLoS ONE. 2013;8:e50573. doi:10.1371/journal.pone. 0050573. Cheung YB. "A body shape index" in middle-age and older Indonesian population: scaling exponents and association with incident hypertension. PLoS ONE. 2014;9:e85421. doi:10.1371/journal.pone.00 85421. Haghighatdoost F, Sarrafzadegan N, Mofammadifard N, Asgary S, Botsham M, Azadbakht L. Assessing body shape index as a risk predictor for cardiovascular disease and metabolic syndrome among Iranian adults. Nutrition. 2014;30:636–44. WHO Expert Consultation. Appropriate body-mass index for Asian populations and its implications for policy and intervention strategies. Lancet. 2004;363:157–163. Després J-P. Body fat distribution and risk of cardiovascular disease. Circulation. 2012;126:1301–13. World Health Organization (WHO). Waist circumference and waist-to-hip ratio: Report of a WHO Expert Consultation. Geneva: WHO; 2008. International Diabetes Federation. Global Guideline for type 2 diabetes. www.idf.org/sites/default/files/IDF%20T2DM%20Guideline.pdf Friedewald WT, Levy R, Fredricson D. Estimation of concentrations of low density lipoprotein concentrations without use of the preparative ultra-centrifugation. Clin Chem. 1972;18:499–504. Orakzai SH, Nasir K, Blaha M, Blumenthal RS, Raggi P. Non-HDL cholesterol is strongly associated with coronary artery calcification in asymptomatic individuals. Atherosclerosis. 2009;202:289–95. 
European Atherosclerosis Society Consensus Panel. Triglyceride-rich lipoproteins and high-density lipoprotein cholesterol in patients at high risk of cardiovascular disease: evidence and guidance for management. Eur Heart J. 2011;32:1345–61. Perk J. European Guidelines on cardiovascular disease prevention in clinical practice. Eur Heart J. 2012;33:1635–701. Raeven GM. Insulin resistance: the link between adiposity and cardiovascular disease. Med Clin North Am. 2011;95:875–92. Maessen MFH, Eijsfogels TMH, Verhaggen RJHM, Hopmn MTE, Verbeek ALM, de Vegt F. Entering a new era of body indices: the feasibility of a body shape index and body roundness index to identify cardiovascular health status. PLoS One. 2014;9:e107212. Janssen I, Heymsfield SB, Allison DB, Kotler DP, Ross R. Body mass index and waist circumference independently contribute to the prediction of nonabdominal, abdominal subcutaneous, and visceral fat. Am J Clin Nutr. 2002;75:683–8. Pouliot M-C, Desprěs J-P, Lemieux S, Moorjani S, Bouchard C, Tremblay A, et al. Waist circumference and abdominal sigittal diameter: best simple anthropometric indexes of abdominal visceral adipose tissue accumulation and related cardiovascular risk in men and women. Am J Cardiol. 1994;73:460–3. Lofgren I, Herron K, Zern T, West K, Patalay M, Shachter NS, et al. Waist circumference is a better predictor than body mass index of coronary hart disease risk in overweight premenopausal women. J Nutr. 2004;134:1071–6. Coral-Romero A, Sert-Kuniyoshi FH, Sierra-Johnston J, Orban M, Gami AQ, Davison D, et al. Modest visceral fat gain cause endothelial dysfunction in healthy humans. J Am Coll Cardiol. 2010;56:662–4. Liu J, Fox CS, Hickson DA, May WD, Hirstone KG, Carr JJ, et al. Impact of abdominal visceral and subcutaneous adipose tissue on cardiometabolic risk factors: the Jackson Heart Study. J Clin Endocrinol Metab. 2010;95:5419–26. Wildman RP, Muntner P, Reynolds K, McGinn AP, Rajpathak S, Rosetti Wylie J, et al. The obese without cardiometabolic risk factor clustering and the normal weight with cardiometabolic risk factor clustering. Arch Intern Med. 2008;168:1617–24. Sucurro E, Marini MA, Frontoni S, Hibal ML, Andreozzi F, Lauro R, et al. Insulin secretion in metabolically obese, but normal weight, and in metabolically healthy but obese individuals. Obesity. 2008;16:1881–6. We would like to thank all the students who volunteered for the study. Department of Biochemistry and Biology, Józef Pilsudski University of Physical Education, Box 55, 00-968, Warsaw, Poland Marzena Malara , Anna Kęska , Joanna Tkaczyk & Grażyna Lutosławska Search for Marzena Malara in: Search for Anna Kęska in: Search for Joanna Tkaczyk in: Search for Grażyna Lutosławska in: Correspondence to Grażyna Lutosławska. MM conceived the study, carried out biochemical measurements and drafted the manuscript. GL participated in the design of the study and performed data analysis. AK carried out anthropometric measurements and participated in the design of the study. JT participated in study coordination and helped to draft the manuscript. All authors read and approved the manuscript. Malara, M., Kęska, A., Tkaczyk, J. et al. Body shape index versus body mass index as correlates of health risk in young healthy sedentary men. J Transl Med 13, 75 (2015) doi:10.1186/s12967-015-0426-z DOI: https://doi.org/10.1186/s12967-015-0426-z Body shape index
CommonCrawl
All Formulas of Thermodynamics Chemistry Class 11, JEE, NEET Here is the list of all formulas of Thermodynamics chemistry Class 11, JEE, NEET. Please go through all the formulas below. All Formulas of Thermodynamics Chemistry Class 11 Thermodynamic processes: Isothermal process: $T =$ constant, $dT = 0$, $\Delta T = 0$. Isochoric process: $V =$ constant, $dV = 0$, $\Delta V = 0$. Isobaric process: $P =$ constant, $dP = 0$, $\Delta P = 0$. Adiabatic process: $q = 0$, i.e. heat exchange with the surroundings $= 0$ (zero). IUPAC sign convention about heat and work: Work done on the system = Positive; Work done by the system = Negative. $1^{\text{st}}$ Law of Thermodynamics: $$\Delta U = (U_{2} - U_{1}) = q + w$$ Law of equipartition of energy: $$U = \frac{f}{2} nRT \quad \text{(only for ideal gas)}$$ $$\Delta E = \frac{f}{2} nR(\Delta T)$$ where $f$ = degrees of freedom for that gas (translational + rotational): $f = 3$ for monoatomic, $f = 5$ for diatomic or linear polyatomic, $f = 6$ for non-linear polyatomic. Calculation of heat (q): Total heat capacity: $C_{T} = \frac{\Delta q}{\Delta T} = \frac{dq}{dT}$, in $J\ {}^{\circ}C^{-1}$. Molar heat capacity: $C = \frac{\Delta q}{n\,\Delta T} = \frac{dq}{n\,dT}$, in $J\ mole^{-1} K^{-1}$; $C_{P} = \frac{\gamma R}{\gamma - 1}$, $C_{v} = \frac{R}{\gamma - 1}$. Specific heat capacity (s): $s = \frac{\Delta q}{m\,\Delta T} = \frac{dq}{m\,dT}$, in $J\ gm^{-1} K^{-1}$. WORK DONE (w): Isothermal reversible expansion/compression of an ideal gas: $$W = -nRT \ln(V_{f}/V_{i})$$ Reversible and irreversible isochoric processes: since $dV = 0$, $dW = -P_{\text{ext}}\,dV = 0$. Reversible isobaric process: $$W = -P(V_{f} - V_{i})$$ Adiabatic reversible expansion: $T_{2}V_{2}^{\gamma-1} = T_{1}V_{1}^{\gamma-1}$. Reversible work: $$W = \frac{P_{2}V_{2} - P_{1}V_{1}}{\gamma - 1} = \frac{nR(T_{2} - T_{1})}{\gamma - 1}$$ Irreversible work: $$W = nC_{v}(T_{2} - T_{1}) = -P_{\text{ext}}(V_{2} - V_{1})$$ and use $$\frac{P_{1}V_{1}}{T_{1}} = \frac{P_{2}V_{2}}{T_{2}}$$ Free expansion – always irreversible, and since $P_{\text{ext}} = 0$, $W = 0$. If no heat is supplied, $q = 0$, then $\Delta E = 0$ and so $\Delta T = 0$. Application of 1st Law: $$\Delta U = \Delta Q + \Delta W \quad \Rightarrow \quad \Delta W = -P\Delta V$$ $$\therefore \Delta U = \Delta Q - P\Delta V$$ Constant volume process: heat given at constant volume = change in internal energy; $\therefore du = (dq)_{v}$, $du = nC_{v}\,dT$, $C_{v} = \frac{1}{n} \cdot \frac{du}{dT} = \frac{f}{2}R$. Constant pressure process: $H \equiv$ Enthalpy (state function and extensive property); $$H = U + PV$$ $\Rightarrow C_{p} - C_{v} = R$ (only for ideal gas). Second Law of Thermodynamics: $\Delta S_{\text{universe}} = \Delta S_{\text{system}} + \Delta S_{\text{surrounding}} > 0$ for a spontaneous process. 
Entropy (S): $$\Delta S_{\text{system}} = \int_{A}^{B} \frac{dq_{rev}}{T}$$ Entropy calculation for an ideal gas undergoing an irreversible process from State A $(P_{1}, V_{1}, T_{1})$ to State B $(P_{2}, V_{2}, T_{2})$: $$\Delta S_{\text{system}} = nC_{v} \ln \frac{T_{2}}{T_{1}} + nR \ln \frac{V_{2}}{V_{1}} \quad \text{(only for an ideal gas)}$$ Third Law of Thermodynamics: The entropy of perfect crystals of all pure elements & compounds is zero at the absolute zero of temperature. Gibbs free energy (G): (state function and an extensive property) $$G_{\text{system}} = H_{\text{system}} - TS_{\text{system}}$$ Criteria of spontaneity: (i) if $\Delta G_{\text{system}}$ is $(-ve) < 0$ $\Rightarrow$ process is spontaneous; (ii) if $\Delta G_{\text{system}}$ is $> 0$ $\Rightarrow$ process is non-spontaneous; (iii) if $\Delta G_{\text{system}} = 0$ $\Rightarrow$ system is at equilibrium. Physical interpretation of $\Delta G$: the maximum amount of non-expansion work which can be performed: $$\Delta G = dw_{\text{non-exp}} = dH - TdS$$ Standard free energy change $(\Delta G^{\circ})$: $$\Delta G^{\circ} = -2.303\,RT \log_{10} K$$ At equilibrium $\Delta G = 0$. The decrease in free energy $(-\Delta G)$ is given as: $$-\Delta G = W_{\text{net}} = 2.303\,nRT \log_{10} \frac{V_{2}}{V_{1}}$$ $\Delta G_{f}^{\circ}$ for an element in its standard (elemental) state $= 0$; $\Delta G^{\circ}_{\text{reaction}} = G^{\circ}_{\text{products}} - G^{\circ}_{\text{reactants}}$. Thermochemistry: change in standard enthalpy $\Delta H^{\circ} = H_{m,2}^{\circ} - H_{m,1}^{\circ}$ = heat added at constant pressure $= C_{p}\,\Delta T$. If $H_{\text{products}} > H_{\text{reactants}}$, the reaction should be endothermic, as we have to give extra heat to the reactants to get them converted into products; and if $H_{\text{products}} < H_{\text{reactants}}$, the reaction will be exothermic, as the extra heat content of the reactants will be released during the reaction. Enthalpy change of a reaction: $$\Delta H_{\text{reaction}} = H_{\text{products}} - H_{\text{reactants}}$$ $\Delta H^{\circ}_{\text{reaction}} = H^{\circ}_{\text{products}} - H^{\circ}_{\text{reactants}}$; $\Delta H^{\circ}_{\text{reaction}}$ positive $\Rightarrow$ endothermic, negative $\Rightarrow$ exothermic. Temperature dependence of $\Delta H$ (Kirchhoff's equation): for a constant pressure reaction $$\Delta H_{2}^{\circ} = \Delta H_{1}^{\circ} + \Delta C_{p}(T_{2} - T_{1})$$ where $\Delta C_{p} = C_{p}(\text{products}) - C_{p}(\text{reactants})$, and for a constant volume reaction $$\Delta E_{2}^{\circ} = \Delta E_{1}^{\circ} + \int \Delta C_{V}\,dT$$ Enthalpy of reaction from enthalpies of formation: $$\Delta H_{r}^{\circ} = \Sigma v_{B}\,\Delta H_{f}^{\circ}(\text{products}) - \Sigma v_{B}\,\Delta H_{f}^{\circ}(\text{reactants})$$ where $v_{B}$ is the stoichiometric coefficient. Estimation of enthalpy of a reaction from bond enthalpies: $$\Delta H^{\circ} \approx \Sigma(\text{bond enthalpies of bonds broken}) - \Sigma(\text{bond enthalpies of bonds formed})$$ Resonance energy: $$\Delta H^{\circ}_{\text{resonance}} = \Delta H^{\circ}_{f,\text{experimental}} - \Delta H^{\circ}_{f,\text{calculated}} = \Delta H^{\circ}_{c,\text{calculated}} - \Delta H^{\circ}_{c,\text{experimental}}$$ This was the list of All Formulas of Thermodynamics Chemistry Class 11. You can get complete formula bank here. 
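A short numerical companion may help make a few of these formulas concrete. The sketch below evaluates the isothermal reversible work, the ideal-gas entropy change, and the standard free-energy change from an equilibrium constant for one mole of a monoatomic ideal gas; all state values are arbitrary example numbers, not taken from the text.

```python
import math

R = 8.314  # J/(mol*K)

# Example: 1 mol of an ideal monoatomic gas (f = 3, so Cv = 3R/2 and Cp = Cv + R).
n, f = 1.0, 3
Cv = f / 2 * R
Cp = Cv + R

# Reversible isothermal expansion at 300 K from 10 L to 20 L (assumed values):
T, V1, V2 = 300.0, 10.0, 20.0
w_isothermal = -n * R * T * math.log(V2 / V1)   # work done ON the gas (IUPAC sign)
print(f"w (isothermal, reversible) = {w_isothermal:.1f} J")

# Ideal-gas entropy change between two states (T1,V1) -> (T2,V2):
T1, T2 = 300.0, 400.0
dS = n * Cv * math.log(T2 / T1) + n * R * math.log(V2 / V1)
print(f"dS = {dS:.2f} J/K")

# Standard free-energy change from an equilibrium constant at 298 K:
K = 1.0e5
dG0 = -2.303 * R * 298.0 * math.log10(K)
print(f"dG standard = {dG0 / 1000:.1f} kJ/mol")
```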
CommonCrawl
ratio and proportion meaning in urdu

Here you will find the most important MCQs of Mathematics from basic to advanced, together with the meaning and translation of "ratio" and "proportion" in Urdu script and Roman Urdu. In Urdu, "ratio and proportion" is نسبت اور تناسب (as in the Urdu-medium Class 9 mathematics lectures, e.g., exercise 4.5), and there is an Urdu translation of the Khan Academy lecture Introduction to Ratios (New HD Version) from their Algebra Playlist. The English-to-Urdu dictionary also gives translations of related phrases such as "of biblical proportions" (بائبل کے تناسب کی) and "curtailed of its fair proportions" (تخفیف کرنا کا اس کا بے عیب تعداد). Proportion definition: a comparative relation between things or magnitudes as to size, quantity, number, etc.; a ratio.

A ratio is a comparison of two quantities. It defines the quantitative relation between two amounts, representing the number of times one value contains the other. A proportion, on the other hand, is an equation that says that two ratios are equivalent; it is read as "x is to y as z is to w". In practice, a ratio is most useful when used to set up a proportion — that is, an equation involving two ratios. For instance, if one package of cookie mix results in 20 cookies, that is the same as saying that two packages will result in 40 cookies, since $$\frac{20}{1}=\frac{40}{2}.$$ Knowing that the ratio does not change allows you to form an equation to find the value of an unknown variable: if x and y are in proportion, they either increase together or decrease together by amounts that do not change the ratio. Definition 6 (in Euclid) says that quantities that have the same ratio are proportional, or in proportion; this definition has affinities with Dedekind cuts since, with n and q both positive, np stands to mq as p/q stands to the rational number m/n (dividing both terms by nq).

Ratios can be written in three different ways: as a fraction, using a colon, or using the word "to" (12:20 is "12 to 20"). The ratio 1 : 2 is read as "1 to 2". A part-to-part ratio states the proportion of the parts in relation to each other, and the sum of the parts makes up the whole: of the whole of 3, there is a part worth 1 and another part worth 2. To convert a part-to-part ratio to fractions, divide each part by the whole (so 1 : 2 gives 1/3 and 2/3). Ratios can be reduced by dividing by the greatest common factor: with a ratio comparing 12 to 16, both 12 and 16 can be divided by 4. Example: in a class of 20 students, 12 are girls, so the ratio of girls to total students is 12 : 20, which reduces to 3 : 5. If there are 9 oranges for every 6 apples, the ratio of oranges to apples is 9 to 6, while the ratio of apples to oranges is 6 to 9, or 2 to 3 when reduced.

Practice questions:
1. Find the mean proportion between 5 and 45. A. 9 B. 15 C. 25 D. 50
2. What is the ratio of Faheem's salary to Imran's salary to Naveed's salary if Faheem makes 80,000 rupees, Imran 70,000 rupees and Naveed makes 50,000 rupees?

Related notions appear in other subjects as well. In statistics, the sample proportion is defined as p = x/n, and the sampling distribution of proportion measures the proportion of successes, i.e. the chance of occurrence of certain events, obtained by dividing the number of successes by the sample size n. The golden ratio or divine proportion is a visual representation of the golden number Phi (Φ), which is approximately 1.618; the mathematics behind the golden ratio is heavily connected to the Fibonacci Sequence. In finance, profitability ratios are metrics that assess a company's ability to generate income relative to its revenue, operating costs, balance sheet assets, or shareholders' equity; the debt ratio measures the extent of a company's leverage; gearing ratios have more meaning when they are compared against the gearing ratios of other companies in the same industry; and key ratios are the main mathematical ratios that illustrate and summarize the current financial condition of a company. Financial ratios are created with values taken from the three core financial statements (the income statement, the balance sheet, and the statement of cash flows). In one worked case study undertaken from 2007 to 2011, calculating the profitability ratios showed that the company's Gross Margin increased steadily since 2007.
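The practice questions above can be checked with a few lines of code. The short Python sketch below is added here only for illustration (the helper names are not from this page): it reduces a ratio by the greatest common factor, computes a mean proportion, and computes a sample proportion p = x/n.

from math import gcd, sqrt
from functools import reduce

def simplify_ratio(*parts):
    """Reduce a ratio (e.g. 12:20 or 80000:70000:50000) by the greatest common factor."""
    g = reduce(gcd, parts)
    return tuple(p // g for p in parts)

def mean_proportion(a, b):
    """The mean proportion x between a and b satisfies a : x = x : b, so x = sqrt(a*b)."""
    return sqrt(a * b)

def sample_proportion(successes, n):
    """Sample proportion p = x / n."""
    return successes / n

print(simplify_ratio(12, 20))               # (3, 5)    girls : total students
print(simplify_ratio(9, 6))                 # (3, 2)    oranges : apples
print(mean_proportion(5, 45))               # 15.0      answer B above
print(simplify_ratio(80000, 70000, 50000))  # (8, 7, 5) Faheem : Imran : Naveed
print(sample_proportion(12, 20))            # 0.6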
CommonCrawl
Politics and International Relations (5) Du Bois Review: Social Science Research on Race (5) Compositio Mathematica (3) Comparative Studies in Society and History (1) LIVING WITHIN THE VEIL:: How Black Mothers with Daughters Attending Predominantly White Schools Experience Racial Battle Fatigue When Combating Racial Microaggressions Chasity Bailey-Fakhoury, Donald Mitchell Journal: Du Bois Review: Social Science Research on Race / Volume 15 / Issue 2 / Fall 2018 Print publication: Fall 2018 Using data from a mixed methods study with suburban Detroit, middle-class mothers as participants, we explore the relationship between racial microaggressions and the racial battle fatigue experienced by Black mothers with young daughters attending predominantly White schools. We find that Black mothers are regularly subjected to racial microaggressions by the White teachers, administrators, and parents with whom they interact. When experiencing slights, insults, and indignities, mothers report taking direct action—borne from African American motherwork—to combat the racial microaggressions. In the context of predominantly White schools, Black mothers enact aesthetic presence, maintain a visible presence, and are strategic in their interactions with school personnel. Racial battle fatigue is evident as they experience and combat racial microaggressions. To extend understanding of racial microaggressions, we apply the sociological concept of the Du Boisian Veil to our analysis. We discuss how the Veil—a barrier which protects the Black psyche by grounding the racialized self while simultaneously precluding racial equality by sustaining racial oppression—can induce the racial battle fatigue that is manifested when one is deluged by racial microaggressions. The Flying Newspapermen and the Time-Space of Late Colonial Nigeria Leslie James Journal: Comparative Studies in Society and History / Volume 60 / Issue 3 / July 2018 Recent scholarship on Indian, African, and Caribbean political thinkers and leaders emphasizes the era leading up to and immediately after decolonization as one saturated with awareness of time and history. While much of this scholarship focuses on temporalities that open up the future, this article instead foregrounds imaginings of the present in the currency of news reports. By examining newspaper reports, we can attend in a different way to renderings of time and freedom. This article applies theoretical work on genre and addressivity to analyze how location, space, and time were simultaneously grounded and overcome by Nigerian newspaper columnists, and how this dynamic of bounded transcendence facilitated an array of social and political projects in the time-space of 1930s and 1940s colonial Nigeria. The pseudonymous writers examined in this article applied the trope of flying to exist in an alternate reality. Each "reporter" outstripped the normal logic of time and space through their ability to "jump" from place to place, and even to be in more than one place at once. By existing, as they claimed, "everywhere and nowhere" they literally and figuratively rose above the material reality of the everyday, thus ordaining an exclusive capacity for revelation. Local cohomology of Du Bois singularities and applications to families Linquan Ma, Karl Schwede, Kazuma Shimomoto Journal: Compositio Mathematica / Volume 153 / Issue 10 / October 2017 Published online by Cambridge University Press: 27 July 2017, pp. 
2147-2170 In this paper we study the local cohomology modules of Du Bois singularities. Let $(R,\mathfrak{m})$ be a local ring; we prove that if $R_{\text{red}}$ is Du Bois, then $H_{\mathfrak{m}}^{i}(R)\rightarrow H_{\mathfrak{m}}^{i}(R_{\text{red}})$ is surjective for every $i$ . We find many applications of this result. For example, we answer a question of Kovács and Schwede [Inversion of adjunction for rational and Du Bois pairs, Algebra Number Theory 10 (2016), 969–1000; MR 3531359] on the Cohen–Macaulay property of Du Bois singularities. We obtain results on the injectivity of $\operatorname{Ext}$ that provide substantial partial answers to questions in Eisenbud et al. [Cohomology on toric varieties and local cohomology with monomial supports, J. Symbolic Comput. 29 (2000), 583–600] in characteristic $0$ . These results can also be viewed as generalizations of the Kodaira vanishing theorem for Cohen–Macaulay Du Bois varieties. We prove results on the set-theoretic Cohen–Macaulayness of the defining ideal of Du Bois singularities, which are characteristic- $0$ analogs and generalizations of results of Singh–Walther and answer some of their questions in Singh and Walther [On the arithmetic rank of certain Segre products, in Commutative algebra and algebraic geometry, Contemporary Mathematics, vol. 390 (American Mathematical Society, Providence, RI, 2005), 147–155]. We extend results on the relation between Koszul cohomology and local cohomology for $F$ -injective and Du Bois singularities first shown in Hochster and Roberts [The purity of the Frobenius and local cohomology, Adv. Math. 21 (1976), 117–172; MR 0417172 (54 #5230)]. We also prove that singularities of dense $F$ -injective type deform. THE PHILADELPHIA NEGRO AND THE CANON OF CLASSICAL URBAN THEORY Kevin Loughran This paper outlines the urban theory of W. E. B. Du Bois as presented in the classic sociological text The Philadelphia Negro. I argue that Du Bois's urban theory, which focused on how the socially-constructed racial hierarchy of the United States was shaping the material conditions of industrial cities, prefigured important later work and offered a sociologically richer understanding of urban processes than the canonized classical urban theorists—Weber, Simmel, and Park. I focus on two key areas of Du Bois's urban theory: (1) racial stratification as a fundamental feature of the modern city and (2) urbanization and urban migration. While The Philadelphia Negro has gained recent praise for Du Bois's methodological achievements, I use extensive passages from the work to demonstrate the theoretical importance of The Philadelphia Negro and to argue that this groundbreaking work should be considered canonical urban theory. SOCIOLOGY AND THE THEORY OF DOUBLE CONSCIOUSNESS: W. E. B. Du Bois's Phenomenology of Racialized Subjectivity José Itzigsohn, Karida Brown In this paper we emphasize W. E. B. Du Bois's relevance as a sociological theorist, an aspect of his work that has not received the attention it deserves. We focus specifically on the significance of Du Bois's theory of Double Consciousness. This theory argues that in a racialized society there is no true communication or recognition between the racializing and the racialized. Furthermore, Du Bois's theory of Double Consciousness puts racialization at the center of the analysis of self-formation, linking the macro structure of the racialized world with the lived experiences of racialized subjects. 
We develop our argument in two stages: The first section locates the theory of Double Consciousness within the field of classical sociological theories of the self. We show how the theory addresses gaps in the theorizing of self-formation of James, Mead, and Cooley. The second section presents an analysis of how Du Bois deploys this theory in his phenomenological analysis of the African American experience. The conclusions point out how the theory of Double Consciousness is relevant to contemporary debates in sociological theory. W. E. B. DU BOIS'S CONTRIBUTIONS TO U.S. ECONOMICS (1893–1910) Robert E. Prasch Journal: Du Bois Review: Social Science Research on Race / Volume 5 / Issue 2 / Fall 2008 As a graduate student, Du Bois studied with two of the most important figures within what is today remembered as the German historical school of economics—Gustav Schmoller and Adolf Wagner. By taking seriously Du Bois's early ambitions in the field of economics, and rereading his early work as a social scientist in the context of early twentieth-century economic thought, the following article makes the case that Du Bois should be credited with having made several important contributions to U.S. economics. The article suggests that our failure to remember Du Bois as an economist is a joint consequence of two independent causes. The first is the racist attitudes of the U.S. academy of his time that simply would not accept a highly qualified African American as a colleague. The second is the sweeping changes that have so profoundly modified the method, form, and substance of U.S. economics over the past century. A simple characterization of Du Bois singularities Karl Schwede Journal: Compositio Mathematica / Volume 143 / Issue 4 / July 2007 We prove the following theorem characterizing Du Bois singularities. Suppose that $Y$ is smooth and that $X$ is a reduced closed subscheme. Let $\pi : \tilde{Y} \rightarrow Y$ be a log resolution of $X$ in $Y$ that is an isomorphism outside of $X$. If $E$ is the reduced pre-image of $X$ in $\tilde{Y}$, then $X$ has Du Bois singularities if and only if the natural map $\mathcal{O}_X \rightarrow R \pi_* \mathcal{O}_E$ is a quasi-isomorphism. We also deduce Kollár's conjecture that log canonical singularities are Du Bois in the special case of a local complete intersection and prove other results related to adjunction. W. E. B. DU BOIS BETWEEN WORLDS: Berlin, Empirical Social Research, and the Race Question Barrington S. Edwards Journal: Du Bois Review: Social Science Research on Race / Volume 3 / Issue 2 / September 2006 W. E. B. Du Bois once remarked that "It was in Germany that my first awakening to social reform began" (Aptheker 1982, p. 275). This essay examines the intellectual impact of Du Bois's voyage to Berlin from 1892 to 1894. His acquisition of empirical social research methods under the tutelage of German historical economists, particularly Gustav von Schmoller, armed him with the intellectual and methodological tools he needed in his effort to attack pervasive biological determinist theories in the United States. Using empirical techniques grounded in a "system of ethics" as his conceptual guideposts, Du Bois analyzed race as not a biological but a social phenomenon. To be sure, this was no easy task. He was also a "race man," who, loyal to his moral commitment to building a program of racial uplift, sought legitimacy as a social scientist from his fellow American sociologists, who summarily ignored his work. 
Struggling to serve his roles as both scientist and race man, Du Bois functioned within an intellectual space of double consciousness, constantly vacillating between two communities and two voices. In the end, Du Bois would successfully bring the scientific method to the race question, using inductive methods in his sociological studies on the African American experience. This approach made possible his seminal work The Philadelphia Negro (1899) and, subsequently, his research at Atlanta University. In the final analysis, his empirical research stood its ground, successfully advancing a new paradigm for examining race based on the idea that the putative racial hierarchy, theoretically grounded in biological determinism, was nothing more than a social artifact buttressed by racism. He further demonstrated how the economic disparity between Whites and Blacks functioned as the prime catalyst for "a plexus of social problems" that plagued African American life (Du Bois 1898, p. 14). His conclusions were a radical departure from the work of many American sociologists, many of whom were sympathetic to social Darwinism—his most formidable challenge.This essay would not have been conceived without the steady support and constant pushing of my dear friends and colleagues, all of whom have read or heard this paper at various stages: Anne Harrington, Robert Brain, Stephen C. Ferguson, Fanon Che Wilkins, Craig Koslofsky, Matti Bunzl, Peter Fritzsche, Frederick Hoxie, Dianne Pinderhughes, Jason E. Glenn, Bernadette Atuahene, Adam Biggs, Charlton Copeland, and Bikila Ochoa. Rational, Log Canonical, Du Bois Singularities II: Kodaira Vanishing and Small Deformations Sándor J. Kovács Journal: Compositio Mathematica / Volume 121 / Issue 3 / May 2000 Kollár's conjecture, that log canonical singularities are Du Bois, is proved in the case of Cohen–Macaulay 3-folds. This in turn is used to derive Kodaira vanishing for this class of varieties. Finally it is proved that small deformations of Du Bois singularities are again Du Bois.
CommonCrawl
Category Archives: Politics

How do we decide how many representatives there are for each state?
by David Lowry-Duda Posted on April 3, 2019

The US House of Representatives has 435 voting members (and 6 non-voting members: one each from Washington DC, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the US Virgin Islands). Roughly speaking, the higher the population of a state is, the more representatives it should have. But what does this really mean? If we looked at the US Constitution to make this clear, we would find little help. The third clause of Article I, Section II of the Constitution says

Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers … The number of Representatives shall not exceed one for every thirty thousand, but each state shall have at least one Representative.

This doesn't give clarity.1 In fact, uncertainty surrounding proper apportionment of representatives led to the first presidential veto.

The Apportionment Act of 1792

According to the 1790 Census, there were 3199415 free people and 694280 slaves in the United States.2 When Congress sat to decide on apportionment in 1792, they initially computed the total (weighted) population of the United States to be 3199415 + (3/5)⋅694280 ≈ 3615983. They noted that the Constitution says there should be no more than 1 representative for every 30000, so they divided the total population by 30000 and rounded down, getting 3615983/30000 ≈ 120.5. Thus there were to be 120 representatives. If one takes each state and divides their populations by 30000, one sees that the states should get the following numbers of representatives3

State            ideal   rounded_down
Vermont          2.851   2
New Hampshire    4.727   4
Maine            3.218   3
Massachusetts    12.62   12
Rhode Island     2.281   2
Connecticut      7.894   7
New York         11.05   11
New Jersey       5.985   5
Pennsylvania     14.42   14
Delaware         1.851   1
Maryland         9.283   9
Virginia         21.01   21
Kentucky         2.290   2
North Carolina   11.78   11
South Carolina   6.874   6
Georgia          2.361   2

But here is a problem: the total number of rounded down representatives is only 112. So there are 8 more representatives to give out. How did they decide which to assign these representatives to? They chose the 8 states with the largest fractional "ideal" parts:

New Jersey (0.985)
Connecticut (0.894)
South Carolina (0.874)
Vermont (0.851)
Delaware (0.851)
Massachusetts+Maine (0.838)
North Carolina (0.78)
New Hampshire (0.727)

(Maine was part of Massachusetts at the time, which is why I combine their fractional parts). Thus the original proposed apportionment gave each of these states one additional representative. Is this a reasonable conclusion? Perhaps. But these 8 states each ended up having more than 1 representative for each 30000. Was this limit in the Constitution meant country-wide (so that 120 across the country is a fine number) or state-by-state (so that, for instance, Delaware, which had 59000 total population, should not be allowed to have more than 1 representative)? There is the other problem that New Jersey, Connecticut, Vermont, New Hampshire, and Massachusetts were undoubtedly Northern states. Thus Southern representatives asked, Is it not unfair that the fractional apportionment favours the North?4 Regardless of the exact reasoning, the Secretary of State Thomas Jefferson and Attorney General Edmund Randolph (both from Virginia) urged President Washington to veto the bill, and he did. This was the first use of the Presidential veto.
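As a concrete check of the arithmetic above, the vetoed 1792 allocation can be reproduced in a few lines of Python. The sketch below is added here for illustration (it is not the code the post promises later): floor each ideal quota, then hand the leftover seats to the states with the largest fractional parts. Maine appears separately from Massachusetts here, but with these quotas the same eight states gain a seat either way.

from math import floor

# Ideal quotas (state population / 30000) from the table above.
ideal = {
    "Vermont": 2.851, "New Hampshire": 4.727, "Maine": 3.218, "Massachusetts": 12.62,
    "Rhode Island": 2.281, "Connecticut": 7.894, "New York": 11.05, "New Jersey": 5.985,
    "Pennsylvania": 14.42, "Delaware": 1.851, "Maryland": 9.283, "Virginia": 21.01,
    "Kentucky": 2.290, "North Carolina": 11.78, "South Carolina": 6.874, "Georgia": 2.361,
}

def largest_remainder(ideal, house_size):
    """Floor every quota, then give the leftover seats to the states with the
    largest fractional parts (the scheme in the vetoed 1792 bill)."""
    seats = {s: floor(q) for s, q in ideal.items()}
    leftovers = house_size - sum(seats.values())   # 120 - 112 = 8 here
    by_fraction = sorted(ideal, key=lambda s: ideal[s] - floor(ideal[s]), reverse=True)
    for s in by_fraction[:leftovers]:
        seats[s] += 1
    return seats

# The eight states with the largest fractional parts each gain one seat,
# matching the allocation that President Washington vetoed.
print(largest_remainder(ideal, 120))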
Afterwards, Congress got together and decided on starting with 33000 people per representative and ignoring fractional parts entirely. The exact method became known as the Jefferson Method of Apportionment, and was used in the US until 1830. The subtle part of the method involves deciding on the number 33000. In the US, the exact number of representatives sometimes changed from election to election. This number is closely related to the population-per-representative, but these were often chosen through political maneuvering as opposed to exact decision. As an aside, it's interesting to note that this method of apportionment is widely used in the rest of the world, even though it was abandoned in the US.5 In fact, it is still used in Albania, Angola, Argentina, Armenia, Aruba, Austria, Belgium, Bolivia, Brazil, Bulgaria, Burundi, Cambodia, Cape Verde, Chile, Colombia, Croatia, the Czech Republic, Denmark, the Dominican Republic, East Timor, Ecuador, El Salvador, Estonia, Fiji, Finland, Guatemala, Hungary, Iceland, Israel, Japan, Kosovo, Luxembourg, Macedonia, Moldova, Monaco, Montenegro, Mozambique, Netherlands, Nicaragua, Northern Ireland, Paraguay, Peru, Poland, Portugal, Romania, San Marino, Scotland, Serbia, Slovenia, Spain, Switzerland, Turkey, Uruguay, Venezuela and Wales — as well as in many countries for election to the European Parliament.

Measuring the fairness of an apportionment method

At the core of different ideas for apportionment is fairness. How can we decide if an apportionment is fair? We'll consider this question in the context of the post-1911 United States — after the number of seats in the House of Representatives was established. This number was set at 433, but with the proviso that anticipated new states Arizona and New Mexico would each come with an additional seat.6 So given that there are 435 seats to apportion, how might we decide if an apportionment is fair?

Fundamentally, this should relate to the number of people each representative actually represents. For example, in the 1792 apportionment, the single Delawaran representative was there to represent all 55000 of its population, while each of the two Rhode Island representatives corresponded to 34000 Rhode Islanders. Within the House of Representatives, it was as though the voice of each Delawaran only counted 61 percent as much as the voice of each Rhode Islander.7 The number of people each representative actually represents is at the core of the notion of fairness — but even then, it's not obvious.

Suppose we enumerate the states, so that $S_i$ refers to state i. We'll also denote by $P_i$ the population of state i, and we'll let $R_i$ denote the number of representatives allotted to state i. In the ideal scenario, every representative would represent the exact same number of people. That is, we would have $$\text{pop. per rep. in state i} = \frac{P_i}{R_i} = \frac{P_j}{R_j} = \text{pop. per rep. in state j}$$ for every pair of states i and j. But this won't ever happen in practice. Generally, we should expect $\frac{P_i}{R_i} \neq \frac{P_j}{R_j}$ for every pair of distinct states. If $$\frac{P_i}{R_i} > \frac{P_j}{R_j}, \tag{1}$$ then we can say that each representative in state i represents more people, and thus those people have a diluted vote.

Amounts of Inequality

There are lots of pairs of states. How do we actually measure these inequalities? This would make an excellent question in a statistics class (illustrating how one can answer the same question in different, equally reasonable ways) or even a civics class.
A few natural ideas emerge:

We might try to minimize the differences of constituency size: $\left \lvert \frac{P_i}{R_i} - \frac{P_j}{R_j} \right \rvert$.
We might try to minimize the differences in per capita representation: $\left \lvert \frac{R_i}{P_i} - \frac{R_j}{P_j} \right \rvert$.
We might take overall size into account, and try to minimize both the relative difference in constituency size and the relative difference in per capita representation.

This last one needs a bit of explanation. Define the relative difference between two numbers x and y to be $$\frac{\lvert x - y \rvert}{\min(x, y)}.$$ Suppose that for a pair of states, we have that $(1)$ holds, i.e. that representatives in state j have smaller constituencies than in state i (and therefore people in state j have more powerful votes). Then the relative difference in constituency size is $$\frac{P_i/R_i - P_j/R_j}{P_j/R_j} = \frac{P_i/R_i}{P_j/R_j} - 1.$$ The relative difference in per capita representation is $$\frac{R_j/P_j - R_i/P_i}{R_i/P_i} = \frac{R_j/P_j}{R_i/P_i} - 1 = \frac{P_i/R_i}{P_j/R_j} - 1.$$ Thus these are the same! By accounting for differences in size by taking relative proportions, we see that minimizing relative difference in constituency size and minimizing relative difference in per capita representation are actually the same.

All three of these measures seem reasonable at first inspection. Unfortunately, they all give different apportionments (and all are different from Jefferson's scheme — though to be fair, Jefferson's scheme doesn't seek to minimize inequality and there is no reason to think it should behave the same). Each of these ideas leads to a different apportionment scheme, and in fact each has a name. Minimizing differences in constituency size is the Dean method. Minimizing differences in per capita representation is the Webster method. Minimizing relative differences between both constituency size and per capita representation is the Hill (or sometimes Huntington-Hill) method.

Further, each of these schemes has been used at some time in US history. Webster's method was used immediately after the 1840 census, but for the 1850 census the original Alexander Hamilton scheme (the scheme vetoed by Washington in 1792) was used. In fact, the Apportionment Act of 1850 set the Hamilton method as the primary method, and this was nominally used until 1900.8 The Webster method was used again immediately after the 1910 census. Due to claims of incomplete and inaccurate census counts, no apportionment occurred based on the 1920 census.9 In 1929 an automatic apportionment act was passed.10 In it, up to three different apportionment schemes would be provided to Congress after each census, based on a total of 435 seats:

The apportionment that would come from whatever scheme was most recently used. (In 1930, this would be the Webster method).
The apportionment that would come from the Webster method.
The apportionment that would come from the newly introduced Hill method.

If one reads congressional discussion from the time, then it will be good to note that Webster's method is sometimes called the method of major fractions and Hill's method is sometimes called the method of equal proportions. Further, in a letter written by Bliss, Brown, Eisenhart, and Pearl of the National Academy of Sciences, Hill's method was declared to be the recommendation of the Academy.11 From 1930 on, Hill's method has been used.

Why use the Hill method?
The Hamilton method led to a few paradoxes and highly counterintuitive behavior that many representatives found disagreeable. In 1880, a paradox now called the Alabama paradox was noted. When deciding on the number of representatives that should be in the House, it was noted that if the House had 299 members, Alabama would have 8 representatives. But if the House had 300 members, Alabama would have 7 representatives — that is, making one more seat available led to Alabama receiving one fewer seat. The problem is the fluctuating relationships between the many fractional parts of the ideal number of representatives per state (similar to those tallied in the table in the section The Apportionment Act of 1792).

Another paradox was discovered in 1900, known as the Population paradox. This is a scenario in which a state with a large population and rapid growth can lose a seat to a state with a small population and smaller population growth. In 1900, Virginia lost a seat to Maine, even though Virginia's population was larger and growing much more rapidly. In particular, in 1900, Virginia had 1854184 people and Maine had 694466 people, so Virginia had 2.67 times the population as Maine. In 1901, Virginia had 1873951 people and Maine had 699114 people, so Virginia had 2.68 times the number of people. And yet Hamilton apportionment would have given 10 seats to Virginia and 3 to Maine in 1900, but 9 to Virginia and 4 to Maine in 1901. Central to this paradox is that even though Virginia was growing faster than Maine, the rest of the nation was growing fast still, and proportionally Virginia lost more because it was a larger state. But it's still paradoxical for a state to lose a representative to a second state that is both smaller in population and is growing less rapidly each census.12

The Hill method can be shown to not suffer from either the Alabama paradox or the Population paradox. That it doesn't suffer from these paradoxical behaviours and that it seeks to minimize a meaningful measure of inequality led to its adoption in the US.13

Understanding the modern Hill method in practice

Since 1930, the US has used the Hill method to apportion seats for the House of Representatives. But as described above, it may be hard to understand how to actually apply the Hill method. Recall that $P_i$ is the population of state i, and $R_i$ is the number of representatives allocated to state i. The Hill method seeks to minimize $$\frac{P_i/R_i - P_j/R_j}{P_j/R_j} = \frac{P_i/R_i}{P_j/R_j} - 1$$ whenever $P_i/R_i > P_j/R_j$. Stated differently, the Hill method seeks to guarantee the smallest relative differences in constituency size.

We can work out a different way of understanding this apportionment that is easier to implement in practice. Suppose that we have allocated all of the representatives to each state and state j has $R_j$ representatives, and suppose that this allocation successfully minimizes relative differences in constituency size. Take two different states i and j with $P_i/R_i > P_j/R_j$. (If this isn't possible then the allocation is perfect). We can ask if it would be a good idea to move one representative from state j to state i, since state j's constituency sizes are smaller. This can be thought of as working with $R_i' = R_i + 1$ and $R_j' = R_j - 1$. If this transfer lessens the inequality then it should be made — but since we are supposing that the allocation successfully minimizes relative difference in constituency size, we must have that the inequality is at least as large.
This necessarily means that $P_j/R_j' > P_i/R_i'$ (since otherwise the relative difference is strictly smaller) and $$\frac{P_jR_i'}{P_iR_j'} - 1 \geq \frac{P_iR_j}{P_jR_i} - 1$$ (since the relative difference must be at least as large). This is equivalent to $$\frac{P_j(R_i+1)}{P_i(R_j-1)} \geq \frac{P_iR_j}{P_jR_i} \iff \frac{P_j^2}{(R_j-1)R_j} \geq \frac{P_i^2}{R_i(R_i+1)}.$$ As every variable is positive, we can rewrite this as $$\frac{P_j}{\sqrt{(R_j - 1)R_j}} \geq \frac{P_i}{\sqrt{R_i(R_i+1)}}. \tag{2}$$ We've shown that $(2)$ must hold whenever $P_i/R_i > P_j/R_j$ in a system that minimizes relative difference in constituency size. But in fact it must hold for all pairs of states i and j. Clearly it holds if i = j as the denominator on the left is strictly smaller. If we are in the case when $P_j/R_j > P_i/R_i$, then we necessarily have the chain $P_j/(R_j - 1) > P_j/R_j > P_i/R_i > P_i/(R_i + 1)$. Multiplying the inner and outer inequalities shows that $(2)$ holds trivially in this case. This inequality shows that the greatest obstruction to being perfectly apportioned as per Hill's method is the largest fraction $$ \frac{P_i}{\sqrt{R_i(R_i+1)}} $$ being too large. (Some call this term the Hill rank-index).

An iterative Hill apportionment

This observation leads to the following iterative construction of a Hill apportionment. Initially, assign every state 1 representative (since by the Constitution, each state gets at least one representative). Then, given an apportionment for n seats, we can get an apportionment for n + 1 seats by assigning the additional seat to any state i which maximizes the Hill rank-index $P_i/\sqrt{R_i(R_i+1)}$. Further, it can be shown that there is a unique apportionment in Hill's method (except for ties in the Hill rank-index, which are exceedingly rare in practice). Thus the apportionment is unique. This is very quickly and easily implemented in code. In a later note, I will share the code I used to compute the various data for this note, as well as an implementation of Hill apportionment.
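Until that code appears, the following minimal Python sketch (added here for illustration, with made-up populations) implements the iterative construction just described: every state starts with one seat, and each remaining seat goes to the state with the largest Hill rank-index $P_i/\sqrt{R_i(R_i+1)}$, where $R_i$ is its current seat count.

import heapq

def hill_apportion(populations, house_size):
    """Huntington-Hill apportionment: start every state at one seat, then give each
    remaining seat to the state with the largest rank-index P / sqrt(R * (R + 1))."""
    seats = {state: 1 for state in populations}
    # Max-heap of (-rank_index, state); the rank index uses the current seat count R.
    heap = [(-pop / (1 * 2) ** 0.5, state) for state, pop in populations.items()]
    heapq.heapify(heap)
    for _ in range(house_size - len(populations)):
        _, state = heapq.heappop(heap)
        seats[state] += 1
        r = seats[state]
        heapq.heappush(heap, (-populations[state] / (r * (r + 1)) ** 0.5, state))
    return seats

# Toy example with made-up populations (not census figures).
print(hill_apportion({"A": 2_560_000, "B": 1_210_000, "C": 480_000}, 10))
# -> {'A': 6, 'B': 3, 'C': 1}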
Additional notes: Consequences of the 1870 and 1990 Apportionments

The 1870 Apportionment

Officially, Dean's method of apportionment has never been used. But it was perhaps used in 1870 without being described. Officially, Hamilton's method was in place and the size of the House was agreed to be 292. But the actual apportionment that occurred agreed with Dean's method, not Hamilton's method. Specifically, New York and Illinois were each given one fewer seat than Hamilton's method would have given, while New Hampshire and Florida were given one additional seat each.

There are many circumstances surrounding the 1870 census and apportionment that make this a particularly convoluted time. Firstly, the US had just experienced its Civil War, where millions of people died and millions others moved or were displaced. Animosity and reconstruction were both in full swing. Secondly, the US passed the 14th amendment in 1868, so that suddenly the populations of Southern states grew as former slaves were finally allowed to be counted fully.

One might think that having two pairs of states swap a representative would be mostly inconsequential. But this difference (using Dean's method instead of the agreed-on Hamilton method) changed the result of the 1876 Presidential election. In this election, Samuel Tilden won New York while Rutherford B. Hayes won Illinois, New Hampshire, and Florida. As a result, Tilden received one fewer electoral vote and Hayes received one additional electoral vote — and the total electoral voting in the end had Hayes win with 185 votes to Tilden's 184.

There is still one further mitigating factor, however, that causes this to be yet more convoluted. The 1876 election is perhaps the most disputed presidential election. In Florida, Louisiana, and South Carolina, each party reported that its candidate had won the state. Legitimacy was in question, and it's widely believed that a deal was struck between the Democratic and Republican parties (see wikipedia and 270 to win). As a result of this deal, the Republican candidate Rutherford B. Hayes would gain all disputed votes and remove federal troops (which had been propping up reconstructive efforts) from the South. This marked the end of the "Reconstruction" period, and allowed the rise of the Democratic Redeemers (and their rampant black voter disenfranchisement) in the South.

Similar in consequence though not in controversy, the apportionment of 1990 influenced the results of the 2000 presidential election between George W. Bush and Al Gore (as the 2000 census is not complete before the election takes place, so the election occurs with the 1990 electoral college sizes). The modern Hill apportionment method was used, as it has been since 1930. But interestingly, if the originally proposed Hamilton method of 1792 was used, the electoral college would have been tied at 269 votes each.14 If Jefferson's method had been used, then Gore would have won with 271 votes to Bush's 266. These decisions have far-reaching consequences!

References

Balinski, Michel L., and H. Peyton Young. Fair representation: meeting the ideal of one man, one vote. Brookings Institution Press, 2010.
Balinski, Michel L., and H. Peyton Young. "The quota method of apportionment." The American Mathematical Monthly 82.7 (1975): 701-730.
Bliss, G. A., Brown, E. W., Eisenhart, L. P., & Pearl, R. (1929). Report to the President of the National Academy of Sciences. February, 9, 1015-1047.
Crocker, R. House of Representatives Apportionment Formula: An Analysis of Proposals for Change and Their Impact on States. DIANE Publishing, 2011.
Huntington, E. V. The Apportionment of Representatives in Congress, Transactions of the American Mathematical Society 30 (1928), 85–110.
Peskin, Allan. "Was there a Compromise of 1877." The Journal of American History 60.1 (1973): 63-75.
US Census Results
US Congressional Record, as collected at https://memory.loc.gov/ammem/amlaw/lwaclink.html
George Washington's collected papers, as archived at https://web.archive.org/web/20090124222206/http://gwpapers.virginia.edu/documents/presidential/veto.html
Wikipedia on the Compromise of 1877, at https://en.wikipedia.org/wiki/Compromise_of_1877
Wikipedia on Arthur Vandenberg, at https://en.wikipedia.org/wiki/Arthur_Vandenberg

Posted in Data, Expository, Mathematics, Politics, Story | Tagged apportionment, election, Hill apportionment

Segregation, Gerrymandering, and Schelling's Model
by David Lowry-Duda Posted on February 3, 2018

[This note is more about modeling some of the mathematics behind political events than politics themselves. And there are pretty pictures.]

Gerrymandering has become a recurring topic in the news. The Supreme Court of the US, as well as more state courts and supreme courts, is hearing multiple cases on partisan gerrymandering (all beginning with a case in Wisconsin). Intuitively, it is clear that gerrymandering is bad.
It allows politicians to choose their voters, instead of the other way around. And it allows the majority party to quash minority voices. But how can one identify a gerrymandered map? To quote Justice Kennedy in his Concurrence the 2004 Supreme Court case Vieth v. Jubelirer: When presented with a claim of injury from partisan gerrymandering, courts confront two obstacles. First is the lack of comprehensive and neutral principles for drawing electoral boundaries. No substantive definition of fairness in districting seems to command general assent. Second is the absence of rules to limit and confine judicial intervention. With uncertain limits, intervening courts–even when proceeding with best intentions–would risk assuming political, not legal, responsibility for a process that often produces ill will and distrust. Later, he adds to the first obstacle, saying: The object of districting is to establish "fair and effective representation for all citizens." Reynolds v. Sims, 377 U.S. 533, 565—568 (1964). At first it might seem that courts could determine, by the exercise of their own judgment, whether political classifications are related to this object or instead burden representational rights. The lack, however, of any agreed upon model of fair and effective representation makes this analysis difficult to pursue. From Justice Kennedy's Concurrence emerges a theme — a "workable standard" of identifying gerrymandering would open up the possibility of limiting partisan gerrymandering through the courts. Indeed, at the core of the Wisconsin gerrymandering case is a proposed "workable standard", based around the efficiency gap. Thomas Schelling and Segregation In 1971, American economist Thomas Schelling (who later won the Nobel Prize in Economics in 2005) published Dynamic Models of Segregation (Journal of Mathematical Sociology, 1971, Vol 1, pp 143–186). He sought to understand why racial segregation in the United States seems so difficult to combat. He introduced a simple model of segregation suggesting that even if each individual person doesn't mind living with others of a different race, they might still choose to segregate themselves through mild preferences. As each individual makes these choices, overall segregation increases. I write this post because I wondered what happens if we adapt Schelling's model to instead model a state and its district voting map. In place of racial segregation, I consider political segregation. Supposing the district voting map does not change, I wondered how the efficiency gap will change over time as people further segregate themselves. It seemed intuitive to me that political segregation (where people who had the same political beliefs stayed largely together and separated from those with different political beliefs) might correspond to more egregious cases of gerrymandering. But to my surprise, I was (mostly) wrong. Let's set up and see the model. Posted in Expository, Mathematics, Politics, Programming, Python | Tagged gerrymandering, python, Thomas Schelling | Leave a comment The Hawaiian Missile Crisis I read an article from Doug Criss on CNN yesterday with the title "Hawaii's governor couldn't correct the false missile alert sooner because he forgot his Twitter password."1 It turns out that Governor Ige knew within two minutes that the alert was a false alarm, but (in the words of the article) "he couldn't hop on Twitter and tell everybody — because he didn't know his password." There are a couple of different ways to take this story. 
The most common response I have seen is to blame the employee who accidentally triggered the alarm, and to forgive the Governor his error because who could have guessed that something like this would happen? The second most common response I see is a certain shock that the key mouthpiece of the Governor in this situation is apparently Twitter. There is some merit to both of these lines of thought. Considering them in turn: it is pretty unfortunate that some employee triggered a state of hysteria by pressing an incorrect button (or something to that effect). We always hope that people with great responsibilities act with extreme caution (like thermonuclear war). How about a nice game of global thermonuclear war? So certainly some blame should be placed on the employee. As for Twitter, I wonder whether or not a sarcasm filter has been watered down between the Governor's initial remarks and my reading it in Doug's article for CNN. It seems likely to me that this comment is meant more as commentary on the status of Twitter as the President's preferred 2 medium of communicating with the People. It certainly seems unlikely to me that the Governor would both frequently use Twitter for important public messages and forget his Twitter credentials. Perhaps this is code for "I couldn't get in touch with the person who manages my Twitter account" (because that person was hiding in a bunker?), but that's not actually important. (more…) Posted in Politics, Story | Tagged Parable for the Nuclear Age | Leave a comment We begin bombing Korea in five minutes: Parallels to Reagan in 1984 by David Lowry-Duda Posted on January 4, 2018 On a day when President and Commander-in-Chief Donald Trump tweets belligerent messages aimed at North Korea, I ask: "Have we seen anything like this ever before?" In fact, we have. Let's review a tale from Reagan. August 11, 1984: President Reagan is preparing for his weekly NPR radio address. The opening line of his address was to be My fellow Americans, I'm pleased to tell you that today I signed legislation that will allow student religious groups to begin enjoying a right they've too long been denied — the freedom to meet in public high schools during nonschool hours, just as other student groups are allowed to do.1 During the sound check, President Reagan joked My fellow Americans, I'm pleased to tell you today that I've signed legislation that will outlaw Russia forever. We begin bombing in five minutes. http://davidlowryduda.com/wp-content/uploads/2018/01/ReaganBombsRussia.mp3 This was met with mild chuckles from the audio technicians, and it wasn't broadcast intentionally. But it was leaked, and reached the Russians shortly thereafter. They were not amused. The Soviet army was placed on alert once they heard what Reagan joked during the sound check. They dropped their alert later, presumably when the bombing didn't begin. Over the next week, this gaffe drew a lot of attention. Here is NBC Tom Brokaw addressing "the joke heard round the world" The Pittsburgh Post-Gazette ran an article containing some of the Soviet responses five days later, on 16 August 1984.2 Similar articles ran in most major US newspapers that week, including the New York Times (which apparently retyped or OCR'd these statements, and these are now available on their site). The major Russian papers Pravda and Izvestia, as well as the Soviet News Agency TASS, all decried the President's remarks. Of particular note are two paragraphs from TASS. 
The first is reminiscent of many responses on Twitter today,

Tass is authorized to state that the Soviet Union deplores the U.S. President's invective, unprecedentedly hostile toward the U.S.S.R. and dangerous to the cause of peace.

The second is a bit chilling, especially with modern context,

This conduct is incompatible with the high responsibility borne by leaders of states, particularly nuclear powers, for the destinies of their own peoples and for the destinies of mankind.

In 1984, an accidental microphone gaffe on behalf of the President led to public outcry both foreign and domestic; Soviet news outlets jumped on the opportunity to include additional propaganda.3 It is easy to confuse some of Donald Trump's deliberate actions today with others' mistakes. I hope that he knows what he is doing.

Posted in Politics
CommonCrawl
Estimation of source, path, and site factors of S waves recorded at the S-net sites in the Japan Trench area using the spectral inversion technique Yadab P. Dhakal ORCID: orcid.org/0000-0002-0399-48431, Takashi Kunugi1, Hiroaki Yamanaka2, Atsushi Wakai1, Shin Aoi1 & Azusa Nishizawa1 Earth, Planets and Space volume 75, Article number: 1 (2023) Cite this article S-net is a large-scale ocean bottom (OB) network in the Japan Trench area, consisting of inline-type 150 observatories equipped with seismometers and pressure gauges. Among them, 41 observatories have been buried about one meter beneath the seafloors in the shallow water regions (water depth <1500 m). We analyzed the strong-motion data recorded at the S-net sites from earthquakes with magnitudes 3.5 < Mw ≤ 7 and focal depths < 70 km to understand the site amplification effect on the recorded motions. We used the spectral inversion technique and obtained some fundamental properties of the earthquake source spectra, path attenuation, and site factors from the horizontal-component S-wave portions of the recordings. We obtained that the source spectra followed the \({\omega }^{-2}\) source model generally well, and the estimated magnitudes were mostly within ± 0.3 magnitude units of the catalog magnitudes. The stress drops increased systematically with the focal depths, and the values for the shallow earthquakes in the Pacific Plate were higher than those for the interplate earthquakes with comparable focal depths. The path-averaged quality factors were generally frequency-dependent and were somewhat larger than those in the past studies. The peak site frequencies ranged between about 0.2 and 10 Hz, while the peak amplification factors ranged between 10 and 50. Even though the peak frequencies and amplification factors differed from site to site, the peak frequencies were mostly higher than 2 Hz at the outer trench stations, while they were lower than 2 Hz at many inner trench stations. The amplification factors at a few OB sites in the shallow water regions were comparable with the theoretical ones computed from the 1-D subsea model. The amplification factors at intermediate frequencies (~ 0.3 to 2 Hz) generally increased with P-wave travel time in the sediments estimated from the multi-channel seismic survey. At about 20% of the sites (mainly at the unburied stations), spurious site spectra were recognized at frequencies higher than about 4 Hz. If the site spectra between 4 and 10 Hz are required, using only the X-component records is recommended. S-net is a large-scale ocean-bottom network for earthquakes and tsunamis in the Japan Trench area established after the 2011 Tohoku–oki earthquake disaster. One of the main objectives of the establishment of the S-net was to enhance the Japan Meteorological Agency (JMA) earthquake early warning (EEW) and tsunami early warning (Aoi et al. 2020). The network consists of inline-type 150 observatories divided into six segments, namely, S1, S2, S3, S4, S5, and S6 (Fig. 1). It transmits waveform data to the data center of the National Research Institute for Earth Science and Disaster Resilience (NIED) continuously. The observatories and cables in shallow water regions (water depth < 1500 m) have been buried one meter beneath the seafloors to protect them from fishing activities and increase ground coupling except for three stations, S1N08, S1N12, and S6N24. The total numbers of buried and unburied stations are 41 and 109, respectively. Index maps. 
Left panel: black-filled triangles denote the recording stations on land (KiK-net) and color-filled polygons denote the recording stations on the ocean bottom (S-net). The six segments of the S-net (S1–S6) are denoted by different polygons, with station codes at the start and end of each segment. The short-dashed rectangle indicates the location of sites, namely, S3N26, S3N25, S3N24, S3N23, S3N22, S3N21, S3N20, S3N19, and S6N12 from left to right, respectively, discussed later. The NAM, PHS, and ST are shorthand for the North-American Plate, Philippine Sea Plate, and Sagami Trough, respectively. Right panel: location of the epicenters of earthquakes used in the present study. The depth to the Pacific Plate is marked with contours (thin dashed lines) at interval of 10 km. The 10 km contour coincides roughly with the Trench axes. The plate-depth data were taken from Hirose (2022) Several previous studies reported that the average amplitudes of the long-period ground motions between about 1 and 20 s were larger at the ocean bottom seismograph sites in the Nankai Trough area (e.g., Nakamura et al. 2015; Kubo et al. 2019) and the Japan Trench area (Dhakal et al. 2021) compared to the average amplitudes at the strong-motion stations on land. Hayashimoto et al. (2019) constructed an equation to estimate the magnitude of the suboceanic earthquakes for EEW using the vertical component records at the seafloor sites to avoid the large amplification of the horizontal-component ground motions due to sediments and also to minimize the effect of the rotational noises during strong shakings. These studies have indicated that understanding the site amplification effects on the recorded motions is essential for the reliable estimation of magnitudes and ground motions for an EEW. Therefore, one of the main objectives of this paper is to evaluate the S-wave site amplification factors at the S-net sites in the Japan Trench area based on the recorded strong-motion data. In the present study, we used the spectral inversion technique (e.g., Andrews 1986; Iwata and Irikura 1988), described at some length in a later section, to simultaneously separate the source, path, and site factors of S waves recorded at the S-net stations. Therefore, the analysis of the source and path factors was an integral part of the present study to validate them and infer the site amplification factors. It also must be emphasized that the study of source spectra of earthquakes is crucial to understand the source processes of earthquakes and predict strong-motions from future earthquakes (e.g., Aki 1967; Hanks 1979; Morikawa and Sasatani 2003; Allmann and Shearer 2009). In addition, as the seismic waves in the present study were observed in wide offshore area, it was of interest to examine the path-averaged quality factor of S waves in the offshore region as most past studies were based on recorded motions on land (e.g., Nakamura et al. 2006; Nakano et al. 2015). In the next two sections, we describe the data and methods, respectively. Then, we present and discuss the source spectra of S waves. We present estimated magnitudes of the earthquakes and compare them with the catalog magnitudes. We also derive stress drops of the earthquakes and discuss them. Following the discussion of the source spectra, the path-averaged quality factors for S waves are presented and compared with several previous studies. Finally, we present and discuss the site amplification factors at the S-net sites in some details. 
It is expected that the results presented in this study will provide a basis for more detailed future studies on developing ground-motion prediction models for EEW and other seismological applications. We prepared the initial earthquake data set based on the moment tensor catalog of F-net, NIED. Then, we obtained 10 min of continuously recorded accelerograms at the S-net stations, beginning from 1 min before the earthquake origin time, from more than 1500 earthquakes, which occurred between February 2016 and October 2021. The moment magnitudes (Mw) of the earthquakes ranged between 3.8 and 7.0, and focal depths were shallower than 70 km. S-net sensors are housed inside cylindrical pressure vessels and record waveforms in three mutually perpendicular directions. However, the sensor axes are not aligned with the horizontal and vertical directions. In this study, conversions of the waveform data from the sensor axes to the horizontal and vertical directions were carried out following the procedures in Takagi et al. (2019). The present study used waveform data from two horizontal directions, namely, X and Y. The X direction coincides with the direction of the long axis of the cylindrical pressure vessel, and the Y direction is perpendicular to the X direction. In the present study, all records were processed uniformly. The mean of a 1-min pre-event noise window was subtracted at each time step for the S-net records. In this paper, the mean or average denotes the arithmetic mean of the data under consideration. For the land-station records, the mean of a 10-s pre-event noise window was subtracted if the noise window was available; otherwise, the mean of the whole record was subtracted. Then, a fourth-order low-cut filter with a corner frequency of 0.07 Hz was applied to suppress low-frequency noise. The S-wave arrival times were picked manually. After several trial analyses, the following magnitude-dependent S-wave time windows were selected: 8 s for Mw < 4.5, 12 s for 4.5 ≤ Mw < 5, 16 s for 5 ≤ Mw < 5.5, and 20 s for Mw ≥ 5.5. The one-second time windows just before and after the defined S-wave time windows were cosine-tapered. Zeros were then padded at the end, such that the window length remained 40.96 s for all recordings. The equal length allows the Fourier spectra to be evaluated at identical frequency points for all recordings. To evaluate the signal-to-noise ratios (SNRs), noise time histories of the same length as the corresponding S-wave time histories were selected from the pre-event parts of the records, and cosine tapering and zero padding were applied to the noise time histories in the same way as for the S-wave parts explained above. Then, Fourier spectra were computed and smoothed using a Parzen window of 0.2 Hz for both S-wave and noise parts of equal durations of 40.96 s. Finally, the SNRs were obtained as the ratios of the mean values of the smoothed spectra of the two horizontal components between the S-wave and noise parts. Final records were selected after imposing the following criteria. The vector peak ground accelerations (PGAs) of the original records were between 5 and 50 cm/s2. The maximum PGA value of 50 cm/s2 was selected to avoid nonlinear site response and minimize the effect of rotational noise in the records (e.g., Dhakal et al. 2017; Nakamura and Hayashimoto 2019; Takagi et al. 2019). The focal depths were shallower than 70 km to avoid the complicated attenuation effect for the deeper earthquakes. The epicentral distances were between 20 and 200 km. 
The choice of the distance limits was arbitrary, but it was expected that the minimum distance of about 20 km reduced the effect of the location errors for the records at small epicentral distances. The records were selected if the SNRs were larger than three at all frequencies between 0.1 and 20 Hz. At least three records exceeding the above threshold SNRs were available at each station and for each earthquake. A preliminary analysis of the focal and centroid moment tensor depths estimated by the JMA showed that the two depth parameters differed substantially for many events, even though the magnitudes of the events were smaller than about 5.5. The F-net (NIED) moment tensor solutions estimate the depths of the events while keeping the epicenters fixed to the JMA epicentral locations. We found that the JMA centroid depths and NIED moment tensor depths were similar. In this study, we used the epicentral location from the JMA and the depth from the F-net moment tensor solution to calculate the source-to-site distance used in the spectral inversion described in the next section. Hereafter, unless stated otherwise, the term 'distance' means the source-to-site distance, and 'depth' means the depth obtained from the F-net NIED moment tensor solution. The locations of the S-net stations and the epicenters of the selected events used in this study are depicted in the left and right panels of Fig. 1, respectively. The final data set contained 6326 recordings from 605 earthquakes. The general distributions of the data set in terms of the magnitude and distance, magnitude and depth, S-wave arrival time and distance, and the peak ground acceleration and distance are shown in Fig. 2a–d, respectively. General distribution of data in terms of magnitude and distance a, magnitude and depth b, S-onset time and distance c, and PGA and distance d The spectral inversion method is a simple yet powerful technique for estimating site amplification, path attenuation, and earthquake source parameters, or combinations thereof, and has been used in several previous studies (e.g., Andrews 1986; Iwata and Irikura 1988; Castro et al. 1990; Kato et al. 1992; Yamanaka et al. 1998; Satoh and Tatsumi 2002; Bindi et al. 2004; Tsuda et al. 2006; Oth et al. 2011; Ren et al. 2013; Nakano et al. 2015; Klimasewski et al. 2019; Fletcher and Boatwright 2020). In the present study, the observed S-wave acceleration Fourier spectra were expressed in the form shown in the following equation: $${O}_{ij}\left(f\right)={S}_{i}\left(f\right){G}_{j}\left(f\right){R}_{ij}^{-1}\exp\left\{\frac{-\pi {R}_{ij}f}{{Q}_{s}(f){V}_{s}}\right\}$$ where \({O}_{ij}(f)\) is the observed S-wave Fourier amplitude spectrum at the jth station for the ith earthquake at frequency \(f\), \({S}_{i}\left(f\right)\) is the source spectral amplitude of the ith earthquake at frequency \(f\), \({G}_{j}\left(f\right)\) is the site amplification factor at the jth station at frequency \(f\), \({Q}_{s}\left(f\right)\) is the average quality factor for S waves along the propagation path at frequency \(f\), \({R}_{ij}\) is the distance between the jth site and the ith event, and \({V}_{s}\) is the average S-wave velocity along the propagation path. In the present study, the Fourier spectral amplitudes were computed as the mean values of the two horizontal components, as described in the previous section. 
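As a rough illustration of the record processing and SNR screening described above, the following Python sketch outlines the magnitude-dependent windowing, the 1-s cosine tapering, the zero-padding to 40.96 s, the spectral smoothing, and the SNR criterion. It is not the authors' code: the function names are ours, and the 0.2-Hz Parzen smoothing is approximated here by a simple running mean.

```python
import numpy as np

def s_window_spectrum(acc, dt, i_start, mw):
    """Fourier amplitude spectrum of the magnitude-dependent S-wave window.

    acc     : one horizontal acceleration component (pre-event mean removed)
    dt      : sampling interval in seconds
    i_start : sample index of the manually picked S-wave arrival
    mw      : moment magnitude, used to select the window length
    """
    # Window lengths quoted in the text: 8, 12, 16, and 20 s
    tw = 8.0 if mw < 4.5 else 12.0 if mw < 5.0 else 16.0 if mw < 5.5 else 20.0
    n_win, n_tap = int(round(tw / dt)), int(round(1.0 / dt))

    seg = np.array(acc[i_start - n_tap: i_start + n_win + n_tap], dtype=float)
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(n_tap) / n_tap))
    seg[:n_tap] *= ramp                     # 1-s cosine taper before the window
    seg[-n_tap:] *= ramp[::-1]              # and after it
    n_fft = int(round(40.96 / dt))          # zero-pad to a common 40.96-s length
    seg = np.pad(seg, (0, max(0, n_fft - seg.size)))

    freq = np.fft.rfftfreq(n_fft, dt)
    amp = np.abs(np.fft.rfft(seg)) * dt     # Fourier amplitude spectrum
    return freq, amp

def smooth(amp, freq, width_hz=0.2):
    """Stand-in for the 0.2-Hz Parzen smoothing (here a simple running mean)."""
    df = freq[1] - freq[0]
    half = max(1, int(round(0.5 * width_hz / df)))
    k = np.ones(2 * half + 1) / (2 * half + 1)
    return np.convolve(amp, k, mode="same")

def snr_ok(sig_x, sig_y, noi_x, noi_y, freq, fmin=0.1, fmax=20.0, thr=3.0):
    """Selection criterion: SNR > 3 at all frequencies between 0.1 and 20 Hz."""
    snr = (sig_x + sig_y) / (noi_x + noi_y)   # ratio of mean horizontal spectra
    band = (freq >= fmin) & (freq <= fmax)
    return bool(np.all(snr[band] > thr))
```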
The spectral amplitudes between 3 and 20 Hz were resampled at an interval of approximately 0.1 Hz to save computational time, and the results were obtained at 294 frequencies between ~ 0.0732 and 20 Hz. Equation (1) was linearized by taking base-10 logarithms and was solved by a least-squares method after rearranging the terms and using the constraint conditions explained in the next paragraph (see Additional file 1 for details). The least-squares solution obtained in this study may be expressed as shown in the following equation (e.g., Searle 1971): $$\widehat{x}= {\left({A}^{T}A\right)}^{-1}{A}^{T}b$$ where \(\widehat{x}\) is a solution set of unknown parameters (the source, path, and site factors), A is the design matrix of known values for the source, path, and site terms, and b is the set of observations for a given frequency. The T and −1 in Eq. (2) indicate the usual transpose and inverse matrix operations, respectively. As has been recognized since the method was introduced by Andrews (1986), a constraint condition is required in the inversion due to the trade-off between the independent parameters. For example, Iwata and Irikura (1988) used the constraint condition that the site amplification factors were greater than two at all sites, while Yamanaka et al. (1998) and Nakano et al. (2015) used the theoretical amplification factors evaluated at a reference site. In the present study, we used the approach employed by Yamanaka et al. (1998) and Nakano et al. (2015). The theoretical amplification factors at two reference sites on land were used as constraint conditions. The use of one reference site could be sufficient for the constraint. However, using two or more reference sites provides more robust results if the reference sites are rock sites. The theoretical amplification factors at the two reference sites, namely, IBRH14 and MYGH11, were calculated based on the structure estimated by the inversion of the surface-to-borehole spectral ratios (SBSRs) (see Fig. 1 for the location of the sites). A simulated annealing technique (e.g., Yamanaka 2005) was used to invert the spectral ratios for velocity models. The borehole sensors at the two sites were set up at depths of 100 and 207 m in layers having S-wave velocities of 3.2 and 2.78 km/s, respectively, according to the PS-logging data (see Additional file 2). The observed SBSRs at the two sites (IBRH14 and MYGH11) computed from several earthquake records and the theoretical SBSRs using the velocity models obtained from the inversion of the observed SBSRs are shown in Fig. 3a, d, respectively. The velocity models are shown in Fig. 3b, e, respectively. The theoretical amplification factors, used as constraints in the spectral inversion, are plotted in Fig. 3c, f, respectively. The layer parameters used to compute the theoretical values of the SBSRs and site amplification factors are presented in Additional file 2. The plots show that the observed spectral ratios are mostly one at frequencies lower than 2 Hz, suggesting that the shallow sediments produce small or no amplification at those frequencies. The theoretical amplification factors of about two at those frequencies reflect the free-surface effect. In contrast to the small amplification at the lower frequencies at the reference sites, amplification factors of about 10 were seen at frequencies around 10 Hz at both sites. The estimated Vs values at the depths of the borehole sensors were about 2.8 km/s, close to the PS-logging values mentioned previously. 
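To make the structure of this inversion concrete, a minimal Python sketch is given below. It assembles the linearized system of Eq. (1) at one frequency, imposes the reference-site constraint through heavily weighted rows (this treatment is our assumption; the study itself follows the formulation in Additional file 1), and solves the system in the least-squares sense of Eq. (2). Array layouts and names are illustrative only.

```python
import numpy as np

LOG10E = np.log10(np.e)

def invert_one_frequency(log_obs, eq_id, st_id, r_km, f,
                         n_eq, n_st, ref_st, ref_log_g, vs=3.5, w=1.0e3):
    """Solve the linearized Eq. (1) at a single frequency f.

    Unknowns: log10 S_i (n_eq values), log10 G_j (n_st values), and 1/Qs(f).
    log_obs : log10 of the observed mean-horizontal S-wave spectra (one per record)
    eq_id, st_id, r_km : earthquake index, station index, and distance per record
    ref_st, ref_log_g  : indices of the reference sites and their theoretical
                         log10 amplification factors (the constraint condition)
    """
    n_rec, n_unk = len(log_obs), n_eq + n_st + 1
    A = np.zeros((n_rec + len(ref_st), n_unk))
    b = np.zeros(n_rec + len(ref_st))
    rows = np.arange(n_rec)
    A[rows, np.asarray(eq_id)] = 1.0                          # source terms
    A[rows, n_eq + np.asarray(st_id)] = 1.0                   # site terms
    A[rows, -1] = -np.pi * np.asarray(r_km) * f * LOG10E / vs # path term times 1/Qs
    b[:n_rec] = np.asarray(log_obs) + np.log10(np.asarray(r_km))  # move R^-1 left
    for k, j in enumerate(ref_st):                            # reference-site rows
        A[n_rec + k, n_eq + j] = w
        b[n_rec + k] = w * ref_log_g[k]
    x, *_ = np.linalg.lstsq(A, b, rcond=None)                 # same answer as Eq. (2)
    src, site, inv_qs = x[:n_eq], x[n_eq:n_eq + n_st], x[-1]
    return 10.0 ** src, 10.0 ** site, 1.0 / inv_qs            # S_i(f), G_j(f), Qs(f)
```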
Hence, the amplification factors obtained at the other sites from the solution of Eq. (1) might be considered close to the absolute site amplification factors above the seismic basement of ~ 3 km/s (e.g., Nakano et al. 2015). In Nakano et al. (2015), the Vs value referred to as the seismic basement was 3.45 km/s. We used \({V}_{s}\) equal to 3.5 km/s in Eq. (1), and the source, path, and site factors obtained in the present study are described in the next sections. a, d Surface-to-borehole spectral ratios (SBSR) at the IBRH14 and MYGH11 sites, respectively. Grey lines denote the SBSR for the individual earthquakes, while the black lines denote the mean values. The red lines denote the theoretical SBSR computed from the S-wave velocity models shown in the panels b and e, respectively. c, f Theoretical amplification factors at the IBRH14 and MYGH11 sites computed from the velocity models shown in the panels b and e, respectively. The theoretical values were computed using the propagator matrix method (Ch. 5, Aki and Richards 2002), assuming a vertically propagating plane S wave. See Additional file 2 for the values of layer parameters used in the theoretical computation of the spectral ratios and amplification factors. Source spectra The acceleration source spectra, \({S}_{i}\left(f\right)\), obtained from the inversion were multiplied by \({\omega }^{-2}\), where \(\omega\) is the circular frequency equal to \(2\pi f\), to obtain the displacement source spectra. The plots of displacement source spectra are shown in Fig. 4 for nine example events (see Table 1 for details about the events). The \({\omega }^{-2}\) source spectral model shown in Eq. (3) for far-field S-wave spectra (e.g., Aki 1967; Brune 1970, 1971) was fitted to the displacement spectra, searching for the flat spectral level, \(\Omega\), and the corner frequency, \({f}_{c}\), that minimized the misfit function defined by Eq. (4) (e.g., Satoh and Tatsumi 2002) for each earthquake separately: $$D(f) =\frac{\Omega }{1+ {\left(\frac{f}{{f}_{c}}\right)}^{2}}$$ where \(D(f)\) is the displacement spectral amplitude at frequency \(f\): $$\mathrm{misfit}= \sum \frac{1}{f}{\left({\mathrm{log}}_{10}\left(\frac{{S}_{f}}{{D}_{f}}\right)\right)}^{2}df$$ where \({S}_{f}\) is the source spectral amplitude obtained from the inversion at frequency \(f\), \({D}_{f}\) is the model spectral amplitude at frequency \(f\), and \(df\) is the frequency width, \({f}_{i+1}- {f}_{i}\). Considering the smaller SNRs at lower frequencies and the simple form of the source spectral model, the summation in Eq. (4) was taken over three frequency ranges based on the magnitudes of the earthquakes: 0.2–10 Hz for Mw ≤ 5, 0.1–10 Hz for 5 < Mw ≤ 6.0, and 0.07–10 Hz for Mw > 6. Location map of example events and their source spectra obtained from the spectral inversion. a Focal-mechanism plots coincide with the locations of the corresponding epicenters, and the colors of the compression parts indicate the corresponding depths to the foci. b Black lines denote the displacement source spectra obtained from the inversion, and red lines show the fitted curves using the \({\omega }^{-2}\) source model. The labels C1, C2, and C3 denote three crustal events, whose catalog magnitudes are indicated with the letter M. The estimated magnitudes based on the flat level of the \({\omega }^{-2}\) source models are given inside the parentheses. c, d Similar to b, but for the interplate and intraslab earthquakes, respectively. 
See Table 1 for details about the events Table 1 Source parameters of the earthquakes, whose epicentral locations and source spectra are shown in Fig. 4 The fitted source spectral models for the nine example events mentioned above are also shown in Fig. 4. It was found that the \({\omega }^{-2}\) model fits the source spectra generally well within the range of frequencies under consideration. Even though the spectra were fitted for frequencies up to 10 Hz, the fits were reasonably good up to 20 Hz for the crustal and interplate earthquakes (Fig. 4b, c), while the models underestimated the spectra to some extent at frequencies over 10 Hz for the intraslab earthquakes (Fig. 4d). The earthquake types are discussed later. We estimated magnitudes of the earthquakes from the source spectra obtained in this study and compared them with the catalog magnitudes. First, we calculated the seismic moment, \({M}_{o}\), using Eq. (5) (e.g., Brune 1970; Iwata and Irikura 1988; Nakano et al. 2015), and then obtained the magnitude (Mw) using Eq. (6) (Hanks and Kanamori 1979): $${M}_{o}=4\pi \rho {V}_{s}^{3}R\Omega /{R}_{\theta \phi }P$$ $${M}_{w}=\frac{2}{3}{\mathrm{log}}_{10}{M}_{o}-10.7$$ where \(\rho\) is density, \({V}_{s}\) is S-wave velocity in the source region, R is distance (unit reference distance of 1 km in this study), \(\Omega\) is flat level of displacement source spectrum, \({R}_{\theta \phi }\) is the average radiation coefficient, and \(P\) is the energy partition ratio. Following Nakano et al. (2015), we used the \({V}_{s}\) values of 3.6 km/s for the crustal earthquakes and 4.0 km/s for the other earthquakes. Similarly, the densities of 2700 and 3000 kg/m3 were used for the source regions of the crustal and other earthquakes, respectively. The values of 0.63 and \(1/\sqrt{2}\) were used for \({R}_{\theta \phi }\) and P, respectively, as we used the mean value of the two horizontal components. These values were identical to those used by Nakano et al. (2015). A comparison of the F-net catalog and estimated magnitudes is shown in Fig. 5. We found that the two groups of magnitudes generally agree well, despite the fact that the catalog magnitudes were estimated using much lower frequency ground motions recorded by broadband seismic stations. The differences were mostly smaller than 0.3 magnitude units except for a few events. Comparison between the F-net NIED catalog magnitudes and the magnitudes estimated from the source spectra. The correlation coefficient (r) and root mean square error (rmse) between the catalog and estimated magnitudes are indicated in the plot Stress drop is an important source parameter used in the simulation of high-frequency ground motions and bears special interest in seismology (e.g., Boore and Atkinson 1987). Here, we discuss the stress drop values assuming simple circular fault models following the previous studies (e.g., Oth et al. 2010; Nakano et al. 2015). First, the source radius for each event was calculated using Eq. (7), and then the stress drop was calculated using Eq. (8) (Brune 1970, 1971): $$r =0.37 \times \frac{{V}_{s}}{{f}_{c}}$$ where \(r\) is radius of the source, \({V}_{s}\) is S-wave velocity, and \({f}_{c}\) is corner frequency: $$\Delta \sigma =\frac{7}{16}\times \frac{{M}_{0}}{{r}^{3}}\times {10}^{-5}$$ where \(\Delta \sigma\) is stress drop, and \({M}_{0}\) and \(r\) are defined in Eqs. (5) and (7), respectively. Expressing the seismic moment and radius in Nm and m, respectively, Eq. 
(8) gives the stress drop values in units of bar, equal to \({10}^{5}\mathrm{ N }{\mathrm{m}}^{-2}\). The stress-drop values for the earthquakes used in this study are plotted as a function of depth in Fig. 6. It was found that the stress drops generally increased with depth (Fig. 6a). The logarithms of the stress drops were fitted against depth assuming a linear relationship, which is given in the following equation: $${\mathrm{log}}_{10}(\Delta \sigma )=1.2768+0.0124D$$ where \(\Delta \sigma\) is the stress drop in bars, as mentioned above, and D is the depth in kilometers. The stress-drop residuals, obtained as the logarithmic ratios between the results obtained from Eqs. (8) and (9), are shown as a function of depth in Fig. 6b. a Plots of stress drop versus depth. The values for the different earthquake types are denoted by different symbols. The PAC-C (red squares) and NAM-C (blue triangles) in the legends denote the crustal earthquakes in the Pacific and North American Plates, respectively. The Unclassified symbols (green inverted triangles) denote the earthquakes whose tectonic types were not ascertained. b Plots of stress-drop residuals against depth. The different symbols correspond to the different earthquake types as indicated in a above. The filled symbols denote the mean residuals for the corresponding earthquake types computed in intervals of 10 km, and the vertical bars denote the range of one standard deviation. The mean residuals are shifted slightly in the horizontal direction from the centers of the corresponding intervals for clarity. To get an idea about the variation of the stress drops between the earthquakes, the earthquakes were divided into four types based on their tectonic origins: interplate (IP) earthquakes, intraslab (S) earthquakes, crustal earthquakes in the Pacific Plate (PAC-C), and crustal earthquakes in the North American Plate (NAM-C). The earthquake types were determined based on their hypocentral locations (JMA), focal mechanisms obtained from the F-net NIED moment tensor solutions, and plate-boundary data compiled by Hirose (2022). Some earthquakes could not be classified easily and were grouped as unclassified. It was found that the mean stress-drop residuals for the shallow earthquakes in the Pacific Plate (depths < 30 km) were larger than the mean residuals for the interplate earthquakes within the corresponding depth ranges. The difference was quite large (almost a factor of four, ~ 0.6 on the base-10 log scale) for depths shallower than 10 km. Allmann and Shearer (2009) reported that the stress drops for the intraplate earthquakes in the oceanic lithosphere were twice as high as those for the interplate earthquakes based on a global data set. For depths deeper than 30 km, the differences between the interplate and intraslab earthquakes were minor. The mean residuals for the crustal earthquakes (NAM-C) were small or near zero except for the depth range between 30 and 40 km, where the mean residual was noticeably larger than that of the other groups of earthquakes. The mean residuals for the unclassified earthquakes were close to zero except for depths between 10 and 20 km, where the mean residual was larger than those from the other groups of earthquakes. However, the number of earthquakes was just two in that depth range, and it is likely that the events belonged to the intraslab type. The stress drops obtained in Nakano et al. (2015) were slightly larger for the intraslab earthquakes than the interplate ones for the deeper events. 
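To make Eqs. (3)–(8) concrete, a minimal Python sketch of the grid-search fit and the subsequent seismic-moment, magnitude, and stress-drop calculation is given below. It is illustrative only: the corner-frequency search grid is ours, and the unit assumption for the flat level (m·s at the 1-km reference distance) must be adapted to the actual spectra.

```python
import numpy as np

def fit_omega2(freq, disp, fmin, fmax):
    """Grid-search fit of Eqs. (3)-(4) to a displacement source spectrum."""
    sel = (freq >= fmin) & (freq <= fmax)
    f, s = freq[sel], disp[sel]
    df = np.diff(f, append=f[-1])               # frequency widths f_{i+1} - f_i
    w = df / f                                  # the 1/f weighting of Eq. (4)
    best = (np.inf, None, None)
    for fc in np.geomspace(0.05, 10.0, 300):    # trial corner frequencies
        shape = 1.0 / (1.0 + (f / fc) ** 2)
        # For a fixed fc, the optimal flat level has a closed form in log space
        log_omega = np.sum(w * (np.log10(s) - np.log10(shape))) / np.sum(w)
        misfit = np.sum(w * (np.log10(s) - log_omega - np.log10(shape)) ** 2)
        if misfit < best[0]:
            best = (misfit, 10.0 ** log_omega, fc)
    return best[1], best[2]                     # flat level Omega, corner frequency fc

def mw_and_stress_drop(omega_m_s, fc, crustal=False):
    """Eqs. (5)-(8) with the parameter values quoted in the text.

    omega_m_s is assumed to be in m*s at the 1-km reference distance so that
    the seismic moment comes out in N*m; unit handling must match the data.
    """
    vs = 3.6e3 if crustal else 4.0e3            # m/s
    rho = 2700.0 if crustal else 3000.0         # kg/m^3
    r_ref, rad_coef, part = 1.0e3, 0.63, 1.0 / np.sqrt(2.0)
    m0 = 4.0 * np.pi * rho * vs ** 3 * r_ref * omega_m_s / (rad_coef * part)  # N*m
    mw = 2.0 / 3.0 * np.log10(m0 * 1.0e7) - 10.7    # Eq. (6) with M0 in dyne*cm
    radius = 0.37 * vs / fc                         # Eq. (7), in m
    stress_drop = 7.0 / 16.0 * m0 / radius ** 3 * 1.0e-5   # Eq. (8), in bar
    return mw, stress_drop
```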
The stress-drop values in Nakano et al. (2015) and the current study were quite comparable. For example, the average values for the interplate and intraslab earthquakes at a depth of about 40 km were about 50–60 bars in Nakano et al. (2015, Fig. 17), which were close to the value of about 60 bars obtained in the present study. However, we found that the average values of the stress drops for the crustal earthquakes were slightly larger in the present study than those in Nakano et al. (2015). The spatial distributions of the stress drop for the interplate, intraslab, and crustal earthquakes are depicted in Additional file 3. The plots do not show a noticeable lateral variation of the stress drops but depict that the stress drops, on average, increase with the depth of the events, as discussed above. Path factors In the present study, we simply assumed the R−1 factor for geometrical attenuation. In this section, we discuss the path attenuation in terms of the quality factor, \({Q}_{s}\), for S waves. The \({Q}_{s}\) values obtained in the present study are plotted as a function of frequency in Fig. 7. The \({Q}_{s}\) values estimated in several previous studies using similar techniques and earthquake events from similar regions (e.g., Satoh and Tatsumi 2002; Nakamura 2009; Oth et al. 2011; Nakano et al. 2015) are also depicted in the plots for comparison. One major difference between the previous and present studies is that the results in the previous studies were derived primarily using records from stations on land, as seafloor observatories were limited or unavailable. As a result, the average propagation paths sampled by the seismic waves may differ significantly between the previous studies and the present one. The \({Q}_{s}\) values obtained in this study showed the following features. The values generally increased linearly with frequency up to 3 Hz, became almost constant between 3 and 10 Hz, and rose gently above 10 Hz (see Fig. 7). The relation between frequency and the \({Q}_{s}\) values obtained in the present study is given in Eq. (10) for frequencies between 0.3 and 3.3 Hz. The values between 3.3 and 10 Hz may be considered constant and equal to the value at 3.3 Hz. The values at the remaining frequencies can be read from Fig. 7 and are discussed below in comparison with previous studies. $${Q}_{s}=310 {f}^{1.12}$$ Nakamura (2009) obtained 3-D attenuation structures beneath the Japan Islands. The region numbers 4 and 13 in Nakamura (2009) correspond to the 0–30 km and 30–60 km depth ranges in the fore-arc region of northeast Japan. The 30–60 km region overlaps with the present study area to some extent. The \({Q}_{s}\) values for the two regions in Nakamura (2009) are shown by the dashed and solid red lines in Fig. 7. The applicable frequency ranges in Nakamura (2009) were between 1 and 10 Hz. The plots show that the \({Q}_{s}\) values obtained in this study were generally larger than those for the 30–60 km depth range in Nakamura (2009) within the applicable frequency range, though the differences were smaller near 1 and 10 Hz. Relationships between the Qs values and frequencies. The values obtained in the present study are denoted by circles. The red dashed and solid lines denote the relationships for the shallower (0–30 km) and deeper (30–60 km) parts of the fore-arc region in northeast Japan (Nakamura 2009). 
The green dashed line, green solid line, and cyan-colored line denote the relationships for the fore-arc region corresponding to the Japan Trench subduction zone (Region 2 in Nakano et al. 2015) for crustal (C), plate boundary (B), and intraslab (I) earthquakes. The black dashed line denotes the relationship reported in Oth et al. (2011) for subcrustal earthquakes, and the blue line denotes the relationship reported in Satoh and Tatsumi (2002) for northeast Japan. Nakano et al. (2015) obtained three sets of \({Q}_{s}\) values corresponding to the three types of earthquakes: crustal, plate boundary (or interplate), and intraplate (or intraslab), denoted by the letters C, B, and I, respectively, following the notations in their paper. The \({Q}_{s}\) values for region number two in their study, which is close to the region of the present study, are shown for the different earthquake groups in Fig. 7. The applicable frequency ranges in Nakano et al. (2015) were between 0.5 and 20 Hz for the interplate and intraslab earthquakes, while they were between 0.5 and 5 Hz for the crustal earthquakes. The \({Q}_{s}\) values at lower and higher frequencies in the present study were close to the \({Q}_{s}\) values for the intraslab earthquakes in Nakano et al. (2015), while the \({Q}_{s}\) values at frequencies around 4–5 Hz were close to those for the crustal earthquakes in Nakano et al. (2015). The \({Q}_{s}\) values plotted in Fig. 7, marked with Oth et al. (2011), were for subcrustal earthquakes from a wide onshore area, the larger part of which faces the Pacific Ocean and the Philippine Sea. The applicable frequency range was between about 0.5 and 20 Hz in Oth et al. (2011). The \({Q}_{s}\) values plotted in Fig. 7 for Satoh and Tatsumi (2002) were for subduction earthquakes in northeast Japan and were valid between 0.2 and 20 Hz. The \({Q}_{s}\) values in Oth et al. (2011), Nakano et al. (2015) for interplate earthquakes (Nakano et al. (2015) B in Fig. 7), and Satoh and Tatsumi (2002) were noticeably smaller than the values in the present study, except for Satoh and Tatsumi (2002) at frequencies higher than about 10 Hz. In summary, the average \({Q}_{s}\) values obtained in this study are either similar to or larger than those in the previous studies mentioned above. The larger values in the present study are generally as expected, because the seismic rays travel through a considerable portion of the high-Q Pacific Plate (e.g., Umino and Hasegawa 1984), as the S-net sites are closer to the Pacific Plate compared to the networks in the other studies. Even though the path-averaged \({Q}_{s}\) values differed between the present and other studies to some extent, the slopes of the lines fitted between frequency and the \({Q}_{s}\) values were around unity at most frequencies. Future studies are required to image the detailed \({Q}_{s}\) structure in the seafloor regions. Site spectra We discuss the site amplification factors obtained from the spectral inversion in this section. First, we present the site amplification factors at three KiK-net sites (stations on land), namely, FKSH19, IWTH23, and IWTH27 (see Fig. 1 for the location of the sites). The site amplification factors at the three sites are shown in Fig. 8. The mean SBSRs computed from several earthquake records at the sites are also shown in the figure. The borehole sensors were set up in layers having Vs values of 3.06, 2.2, and 2.79 km/s at FKSH19, IWTH23, and IWTH27, respectively, according to the PS-logging data. 
Therefore, the SBSRs may be considered equivalent to the site amplification factors at the sites with respect to a reference rock site. The peak frequencies of the spectral ratios were approximately 3.27, 16.06, and 7.34 Hz at the three sites, FKSH19, IWTH23, and IWTH27, respectively. The peak frequencies and their amplitudes matched well between the spectral ratios and the amplification factors at all the sites. The overall shapes of the curves for the spectral ratios and amplification factors were similar. The values of the amplification factors at frequencies lower than about 2 Hz were smaller than two, suggesting no amplification effect by the sediments at those frequencies. The above comparison suggests that the obtained site amplification factors at the three sites were reasonable. Comparison of the mean surface-to-borehole spectral ratios of horizontal component recordings (black lines) and site amplification factors (red lines) from the spectral inversion at three KiK-net sites. a FKSH19, b IWTH23, and c IWTH27. See Fig. 1 for the location of the sites. Hereafter, we describe the site spectra at the S-net sites. The site spectra at nine example sites are plotted in Fig. 9 (red lines) along with the individual-earthquake site spectra (grey lines) and theoretical amplification factors (blue lines) at each site computed from the J-SHIS (Japan Seismic Hazard Information Station) velocity model (Fujiwara et al. 2009, 2012). All the sites except one were from the S3 segment, and they were aligned almost in the east–west direction (see Fig. 1 for the location of the sites). The sites were S3N26 (water depth 128 m), S3N25 (230 m), S3N24 (849 m), S3N23 (1220 m), S3N22 (1645 m), S3N21 (2779 m), S3N20 (5225 m), S3N19 (5591 m), and S6N12 (6111 m), from west to east, respectively. The panels a to i in Fig. 9 are arranged in the order of increasing water depth. The curves for the amplification factors from the inversion (red lines) showed multiple peaks and were generally different from site to site. The shallow-water sites (a, b, c) showed relatively small amplifications (about ten or less) at lower frequencies compared with the deeper water sites (d, e, f, g, h) (about ten or more), except for the deepest site, S6N12 (panel i). The S6N12 site, among the nine sites, showed a peak amplification value of about 20 at approximately 6.68 Hz. The spectra at most sites showed a decreasing trend at frequencies higher than about 10 Hz. We explain later that the peak amplification factors at some sites were influenced to some extent by factors unrelated to the site geology. Comparison of the site amplification factors (red lines) obtained from the spectral inversion with the theoretical amplification factors (blue lines) using the J-SHIS subsurface velocity model. The grey lines denote the site factors corresponding to each earthquake, deduced by dividing the original spectra by the source spectrum of the corresponding earthquake and the path factors. The plots a, b, c, d, e, f, g, h, and i correspond to the sites S3N26, S3N25, S3N24, S3N23, S3N22, S3N21, S3N20, S3N19, and S6N12, as indicated in each panel (see Fig. 1 for the location of the sites). The number of earthquake records (N) used in the spectral inversion and the depth to the seafloor (D) are indicated for each site. 
See Additional file 4 for the J-SHIS model at the sites mentioned above. The theoretical amplification factors computed for vertically incident SH waves using the J-SHIS velocity model above the seismic basement (Vs 3.2 km/s) are comparable with the results from the spectral inversion over a wide frequency range at the two shallow-water sites, S3N26 (water depth 128 m) and S3N25 (230 m), as shown in Fig. 9a, b. Meanwhile, the theoretical values underestimated the site amplification factors from the inversion at the other sites. The differences generally increased with increasing water depth. A similar tendency was seen at a few other shallow and deep-water sites (see Additional file 4). The velocity profiles used to compute the amplification factors at the example sites are shown in Additional file 4. While the total thicknesses of the sedimentary layers above the Vs 3.2 km/s layer were between about 1.5 (S6N12, S3N24) and 4 km (S3N19), the thicknesses of the layers with Vs < 1 km/s were between about 0.1 (S6N12) and 0.9 km (S3N20). The sites located toward the land from the Japan Trench axis showed greater sediment thicknesses than the sites located farther offshore from the trench axis. The obtained site amplification factors at the sites were generally consistent with the distribution of the thickness of the sedimentary layers in the J-SHIS model. Large amplification factors were obtained at the thicker sediment sites, such as S3N19, S3N20, and S3N21, where the thicknesses of the sedimentary layers were about 3–4 km (see Additional file 4). The J-SHIS model in the oceanic region was constructed using limited geophysical survey data (Fujiwara et al. 2012) and lacked information for shallow sediments with velocities smaller than 600 m/s. Thus, the differences in site amplification factors between the inversion and the J-SHIS model may be attributed to the preliminary nature of the velocity model. Moreover, the S-wave time windows selected in this study may have included the 3-D effects of the sedimentary layers around the sites (e.g., Dhakal and Yamanaka 2012). Despite such limitations, the similar amplification factors between the spectral inversion results and the theoretical values at some of the shallow water sites suggest that the site amplification factors obtained in this study may serve as a basis for the validation and improvement of the velocity models in the offshore region. Sawazaki and Nakamura (2020) analyzed the spectral ratios of the coda portions of the ground motion records between the horizontal Y and X components and reported that the spectral ratios show a characteristic 'N' shape with peaks and troughs at about 7 and 13 Hz, respectively. They showed that the two frequencies coincided roughly with the natural vibration frequencies of the cylindrical pressure vessels in the Y and X directions, respectively. They suggested that the natural vibrations were induced by the poor coupling between the seabed and the cylindrical pressure vessels that house the S-net sensors. The 'N-shaped' spectral ratios were conspicuous at several unburied seismometer sites. To avoid or minimize the effect of the natural vibrations of the pressure vessels on the estimated site spectra, we performed the spectral inversion of only the horizontal X-component spectra, and comparisons of the source, path, and site spectra were made with the values described previously (obtained using the mean spectra of the horizontal X and Y components). 
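As background for the theoretical factors mentioned above, the 1-D response of a horizontally layered model to vertically incident SH waves can be computed with the standard up-/down-going wave recursion, which for vertical incidence is equivalent to the propagator-matrix calculation cited earlier. The sketch below is illustrative only; it is not the code used in the study, and the layer values in the usage comment are placeholders rather than the J-SHIS or reference-site models.

```python
import numpy as np

def sh_amplification(freq_hz, thick_m, vs_m_s, rho_kg_m3, q=None):
    """1-D amplification of vertically incident SH waves in a layered medium.

    Layers are ordered from the surface down; the last entries of vs/rho
    describe the half-space (seismic basement). The returned value is the
    surface amplitude divided by the upgoing-wave amplitude in the
    half-space, so a rock site tends to 2 (free-surface effect) at low
    frequency, as in Fig. 3 of the paper.
    """
    freq_hz = np.atleast_1d(freq_hz).astype(float)
    amp = np.zeros_like(freq_hz)
    n_lay = len(thick_m)                       # layers above the half-space
    vs = np.asarray(vs_m_s, float).astype(complex)
    if q is not None:                          # optional damping via complex velocity
        vs = vs * (1.0 + 0.5j / np.asarray(q, float))
    for i, f in enumerate(freq_hz):
        w = 2.0 * np.pi * f
        a, b = 1.0 + 0j, 1.0 + 0j              # up/down amplitudes in the top layer
        for m in range(n_lay):
            alpha = (rho_kg_m3[m] * vs[m]) / (rho_kg_m3[m + 1] * vs[m + 1])
            e = np.exp(1j * w * thick_m[m] / vs[m])
            a, b = (0.5 * a * (1 + alpha) * e + 0.5 * b * (1 - alpha) / e,
                    0.5 * a * (1 - alpha) * e + 0.5 * b * (1 + alpha) / e)
        amp[i] = abs(2.0 / a)
    return amp

# Illustrative two-layer example (placeholder values, not a real S-net profile):
# freqs = np.linspace(0.1, 20.0, 400)
# amp = sh_amplification(freqs, thick_m=[100.0, 900.0],
#                        vs_m_s=[400.0, 1500.0, 3200.0],
#                        rho_kg_m3=[1800.0, 2100.0, 2600.0])
```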
The moment magnitudes, corner frequencies, stress drops, and \({Q}_{s}\) values derived from the two inversions are plotted in Fig. 10. We found that the source spectra and path factors from the two inversions were essentially the same in a statistical sense. a Comparison of the estimated magnitudes (Mw) between the inversions of the X-component spectra and the mean spectra of the X and Y components. b, c Similar to the plot a, but for the corner frequencies (Hz) and stress drops, respectively. d Comparison of the quality factors between the two inversions plotted against frequency. Example comparisons of the site spectra from the two inversions at nine selected sites, whose locations are depicted in Fig. 11, are shown in Fig. 12. Also shown in Fig. 12 are the mean spectral ratios between the horizontal Y- and X-component records (Y/X ratios). The site spectra and the Y/X ratios at three sites, namely, S2N14, S2N15, and S2N16, are shown in the panels a, b, and c, respectively. The water depths at the sites were 162, 264, and 874 m, respectively. The three sites were buried sites. It can be seen that the Y/X spectral ratios are near one, and the site spectra from the two inversions (the red and blue curves denoting the results for the mean spectra of the two components and the X-component spectra only) are very similar. Similarly, the panels d, e, f, g, h, and i in Fig. 12 show the site spectra and Y/X spectral ratios for the other six sites, where the sensor housings were not buried. The water depths ranged between 2247 and 6230 m. The site spectra from the two inversions were similar in shape and peak values. The Y/X spectral ratios had obvious troughs at frequencies near 10 Hz or higher at all the sites, but the peaks were not so prominent except at two sites, namely, S2N21 and S2N22 (see panels h and i in Fig. 12). The peak frequencies of the site spectra and the Y/X spectral ratios were generally different. However, we found that the site spectra at 30 sites (20% of the total stations) obtained from the inversion of the mean spectra of the X and Y components were noticeably contaminated at frequencies that corresponded with the peaks and troughs of the Y/X spectral ratios. The list of the 30 sites, along with the peak frequencies and other properties, is provided in Additional file 5 (see Fig. 11 for the location of the sites). For nine selected sites out of the 30, example plots of the site spectra from the two inversions and the Y/X spectral ratios are shown in Fig. 13. The peak amplitudes of the mean Y/X spectral ratios ranged between about 3 and 9, and the peak frequencies were between about 4 and 9 Hz at the 30 sites. The upper three panels (a, b, c) in Fig. 13 depict the spectra for three sites for which the maximum values of the Y/X spectral ratios were about 3; the sites in the middle three panels (d, e, and f) had maximum Y/X spectral ratios of about 4. The last three panels (g, h, and i) had maximum Y/X spectral ratios of about 7, 7.5, and 9, respectively. The plots depict that the site spectra from the two inversions were generally similar except for the frequencies at and around the peaks and troughs of the Y/X spectral ratios. We found that the difference between the two inversion results becomes conspicuous when the peak value of the Y/X spectral ratios is about 3 or larger. Red circles denote the locations of the sites for which the site spectra are plotted in Fig. 12. The site codes are written near the site symbols. 
Red triangles denote the locations of the sites where the site spectra at higher frequencies were spurious (see Fig. 13). Example comparison of the site spectra obtained from the spectral inversions of the mean spectra of X- and Y-component records (red lines) and the spectra of X-component records (blue lines). The plots a, b, c, d, e, f, g, h, and i correspond to the sites S2N14, S2N15, S2N16, S2N17, S2N18, S2N19, S2N20, S2N21, and S2N22, as indicated in each panel. See Fig. 11 for the location of the sites. The black lines denote the mean spectral ratios between the horizontal Y- and X-component records. The number of earthquake records (N) used for each site and the depth to the seafloor (D) are indicated in each plot. Example comparison of the site spectra obtained from the spectral inversions of the mean spectra of X- and Y-component records (red lines) and the spectra of X-component records (blue lines) at the sites where spurious site spectra were recognized. The plots a, b, c, d, e, f, g, h, and i correspond to the sites S4N26, S6N12, S4N10, S5N17, S5N15, S4N09, S6N16, S5N14, and S1N06, as indicated in each panel. See Fig. 11 for the location of the sites. The black lines denote the mean spectral ratios between the horizontal Y- and X-component records. The number of earthquake records (N) used for each site and the depth to the seafloor (D) are indicated in each plot. It is interesting to see how the amplification factors from the spectral inversion are spatially distributed for a given frequency, to get a general picture of the variation in the site spectra. The spatial distributions of the site amplification factors for three frequencies, approximately 0.33 Hz, 1 Hz, and 5 Hz, are shown in Fig. 14b–d, respectively. The amplification factors at 5 Hz are plotted from the analysis of the X-component records only to avoid the spurious spectra discussed above. Figure 14a shows the two-way travel time in the sedimentary layers for the P wave obtained from the multi-channel seismic (MCS) reflection survey (Nishizawa et al. 2022). The outer trench sites, which form part of the S6 segment, mostly showed smaller travel times (~ 0.5 s). Similarly, the two-way travel times were smaller than 1 s at several sites close to the coast and in the S1 segment. At the other sites, the two-way travel times were mostly larger than 1 s. The longest two-way travel time was about 2.5 s beneath the S2N21 site (see Fig. 14a). Longer travel times indicate thicker low-velocity sediments than shorter ones. The site amplification factors for 0.33 and 1 Hz, depicted in Fig. 14b, c, show patterns roughly similar to the distribution of the two-way travel times shown in Fig. 14a. In contrast, the amplification factors for 5 Hz were larger than the ones for the lower frequencies at many sites in the S6 segment, where the two-way travel times were smaller. The median values of the amplification factors over the S-net sites were about 10 for the three frequencies discussed above. The smaller travel times and larger amplification at the higher frequencies might suggest that the sediment thickness is relatively small at the outer trench sites. Azuma et al. 
(2019) obtained sediment thicknesses based on travel-time differences between P and PS converted waves at the sedimentary basement below the S-net sites and suggested that the sediments landward of the trench tend to be thicker than those estimated from the two-way travel times of the deepest reflectors on the MCS profiles (Nishizawa et al. 2022). We plan to integrate the travel-time information, the site spectra, and the available subsea models to construct improved velocity models in our future studies. a Two-way travel time for the P wave from the base of the sediment to the seafloor estimated by the multi-channel seismic (MCS) reflection survey (Nishizawa et al. 2022). The color-filled squares and circles denote the sites located within and beyond 5 km of the survey line, respectively. The open squares denote the sites where the travel times were not estimated. b, c, and d Amplification factors at frequencies of approximately 0.33, 1, and 5 Hz, respectively. Finally, the relationships between the two-way travel times and the amplification factors at the frequencies of 0.33, 0.5, 1, 2, 5, and 10 Hz are presented in Fig. 15a–f, respectively. The amplification factors from the two inversions, using the mean spectra of the X and Y components and the X-component spectra only, showed similar trends with the two-way travel times, as expected. The plots for 0.33, 0.5, 1, and 2 Hz indicated that the amplification factors generally increased with the two-way travel times, albeit with small correlation coefficients (~ 0.25). We found that the correlation coefficients decreased with increasing frequency above 2 Hz and were approximately zero at 10 Hz. The mean amplification factors were approximately 10 and 2 at frequencies of 5 and 10 Hz, respectively. These latter results may suggest that the thicker sediment layers dampen the higher frequency components more than the lower ones due to the lower Qs values in the sediments (e.g., Boore and Smith 1999). Relationships between the two-way travel times for the P wave and the site amplification factors at frequencies of approximately 0.33, 0.5, 1, 2, 5, and 10 Hz (panels a–f), respectively. The red and black lines indicate the linear fits between the travel times and the logarithms of the amplification factors. The numerals in parentheses indicate the values of the correlation coefficients between the corresponding data sets. Note the difference in the scale of the vertical axis for 10 Hz (panel f) compared to the scales for the other frequencies. The amplification factors are lower than five for 10 Hz at most sites. We analyzed the strong-motion records at the S-net sites from earthquakes with magnitudes 3.5 < Mw \(\le\) 7 and focal depths < 70 km and evaluated the source, path, and site spectra of S waves to understand the S-wave site amplification factors at the sites. We used the spectral inversion technique to obtain the source, path, and site spectra simultaneously from the horizontal-component S-wave portions of the recordings. The source spectra were fitted with the conventional \({\omega }^{-2}\) source model by searching for the flat spectral levels and corner frequencies, and it was found that the source spectra followed the \({\omega }^{-2}\) source model generally well for most earthquakes up to 10 Hz. The estimated magnitudes from the flat levels of the source spectra were mostly within ± 0.3 magnitude units of the F-net NIED catalog magnitudes. 
The stress drops increased with focal depths, which is consistent with the previous studies in the region. The stress drops for the shallow earthquakes in the Pacific Plate were higher than those for the interplate earthquakes with comparable focal depths, while the values were similar for events with depths greater than about 30 km. The path-averaged quality factors increased with frequency up to about 3 Hz, remained almost constant between about 3 and 10 Hz, and then increased gently up to 20 Hz. The values were either similar to or larger than those reported in the previous studies. The peak site frequencies were between about 0.2 and 10 Hz, and the peak amplification factors ranged between about 10 and 50. The peak frequencies and amplification factors generally differed from site to site. However, a moderate regional pattern was observed. The peak frequencies were mostly higher than about 2 Hz at the outer trench sites, while they were lower than 2 Hz at many inner trench sites. The amplification factors at a few shallow-water sites close to the coast were comparable with the theoretical ones computed from the J-SHIS subsea model. The amplification factors from the inversion were larger in the regions, where the sedimentary layers were thicker in the J-SHIS model. However, the amplification factors were considerably underestimated by the J-SHIS model at many sites over wide frequencies. The amplification factors at intermediate frequencies (~ 0.3 to 2 Hz) generally increased with the P-wave travel time in the sediments estimated from the multi-channel seismic reflection survey. All the above results, by and large, suggested that the site spectra were reasonable. However, it was found that the site spectra included dominant peaks at frequencies between about 4 and 10 Hz at about thirty unburied stations, resulting from the coupling problem between the instrument vessels and seabed sediments. We performed the spectral inversion using only the spectra from the horizontal X-component records and found that the spurious peaks at those sites were eliminated, while the source spectra and path factors were very similar to those from the joint use of both X- and Y-component records. We recommend avoiding the Y-component records if the site spectra between about 4 and 10 Hz are required at the unburied stations for any application. Moreover, the site spectra over about 10 Hz may not be reliable due to the higher modes in the Y directions and vibrations of the sensor vessels in the X directions. The results discussed in this paper may be used in the prediction of ground motions for EEW, engineering design of offshore structures, and so on. It is expected that the abovementioned results may also serve as a basis for more detailed future studies regarding the source properties of the subduction zone earthquakes, quality factor of the crust and mantle, and improvement of the subsurface velocity models in the region. The strong-motion recordings at the KiK-net sites and PS-logging data used in this study were obtained from NIED K-NET, KiK-net, National Research Institute for Earth Science and Disaster Resilience, https://doi.org/10.17598/nied.0004. https://www.kyoshin.bosai.go.jp/. The strong-motion recordings at the S-net sites were retrieved from NIED S-net, National Research Institute for Earth Science and Disaster Resilience, https://doi.org/10.17598/nied.0007. https://hinetwww11.bosai.go.jp/auth/download/cont/?LANG=en. 
The J-SHIS deep subsurface model was downloaded from the website: http://www.j-shis.bosai.go.jp/map/JSHIS2/download.html?lang=en. The hypocenter information of the events was taken from the website: https://www.data.jma.go.jp/svd/eqev/data/bulletin/hypo_e.html. The moment magnitudes and depths of the moment tensor solutions were taken from the website: http://www.fnet.bosai.go.jp/event/joho.php?LANG=en. The plate-boundary data used in the classification of earthquake types were retrieved from https://www.mri-jma.go.jp/Dep/sei/fhirose/plate/en.index.html. Readers are encouraged to check the abovementioned websites for details about the original data and to contact the authors with data requests for other specific information, such as the list of earthquakes and the numerical values of the results discussed in the paper. All the websites were last accessed on May 10, 2022. F-net: Full-range seismograph network JMA: Japan Meteorological Agency J-SHIS: Japan Seismic Hazard Information Station KiK-net: Kiban Kyoshin network MCS: Multi-channel seismic reflection survey Mj: JMA magnitude Mw: Moment magnitude NIED: National Research Institute for Earth Science and Disaster Resilience PAC: Pacific Plate PHS: Philippine Sea Plate S-net: Seafloor observation network for earthquakes and tsunamis along the Japan Trench SNR: Signal-to-noise ratio S-to-B: Surface-to-borehole S-wave: Secondary wave/shear wave Aki K (1967) Scaling law of seismic spectrum. J Geophys Res 72:1217–1231 Aki K, Richards PG (2002) Quantitative seismology. Second ed. University Science Books. Allmann BP, Shearer PM (2009) Global variations of stress drop for moderate to large earthquakes. J Geophys Res 114:B01310. https://doi.org/10.1029/2008JB005821 Andrews DJ (1986) Objective determination of source parameters and similarity of earthquakes of different size. In Earthquake Source Mechanics, Geophysical Monograph Series 37:259–267 Aoi S, Asano Y, Kunugi T, Kimura T, Uehira K, Takahashi N, Ueda H, Shiomi K, Matsumoto T, Fujiwara H (2020) MOWLAS: NIED observation network for earthquake, tsunami and volcano. Earth Planets Space 72:126. https://doi.org/10.1186/s40623-020-01250-x Azuma R, Takagi R, Toyokuni G, Nakayama T, Suzuki S, Sato M, Uchida N, Hino R (2019) Seafloor sediment thickness below S-net observatories revealed from PS conversion wave at the sedimentary base. Seismological Society of Japan, Fall meeting, S06-08 Bindi D, Castro RR, Franceschina G, Luzi L, Pacor F (2004) The 1997–1998 Umbria-Marche sequence (central Italy): source, path, and site effects estimated from strong motion data recorded in the epicentral area. J Geophys Res 109:B04312. https://doi.org/10.1029/2003JB002857 Boore DM, Atkinson GM (1987) Stochastic prediction of ground motion and spectral response parameters at hard-rock sites in eastern North America. Bull Seismol Soc Am 77(2):440–467. https://doi.org/10.1785/BSSA0770020440 Boore DM, Smith CE (1999) Analysis of earthquake recordings obtained from the Seafloor Earthquake Measurement System (SEMS) Instruments deployed off the coast of Southern California. Bull Seism Soc Am 89(1):260–274. https://doi.org/10.1785/BSSA0890010260 Brune JN (1970) Tectonic stress and the spectra of seismic shear waves from earthquakes. J Geophys Res 75:4997–5009 Brune JN (1971) Correction. J Geophys Res 76:5002 Castro RR, Anderson JG, Singh K (1990) Site response, attenuation and source spectra of S waves along the Guerrero, Mexico, subduction zone. Bull Seismol Soc Am 80(6A):1481–1503. 
https://doi.org/10.1785/BSSA08006A1481 Dhakal YP, Yamanaka H (2012) Delineation of S-wave time window in the Kanto basin for tuning velocity models of deep sedimentary layers. Int Symp Earthq Eng JAEE 1:85–94 Dhakal YP, Shin A, Kunugi T, Suzuki W, Kimura T (2017) Assessment of nonlinear site response at ocean bottom seismograph sites based on S-wave horizontal-to-vertical spectral ratios: a study at the Sagami Bay area K-NET sites in Japan. Earth Planets Space 69:29. https://doi.org/10.1186/s40623-017-0615-5 Dhakal YP, Kunugi T, Suzuki W, Kimura T, Morikawa N, Aoi S (2021) Strong motions on land and ocean bottom: comparison of horizontal PGA, PGV, and 5% damped acceleration response spectra in northeast Japan and the Japan Trench area. Bull Seism Soc Am 111:3237–3260. https://doi.org/10.1785/0120200368 Fletcher JB, Boatwright J (2020) Peak ground motions and site response at Anza and Imperial Valley, California. Pure Appl Geophys 177:2753–2769. https://doi.org/10.1007/s00024-019-02366-2 Fujiwara H, Kawai S, Aoi S, Morikawa N, Senna S, Kudo N, Ooi M, Hao KX, Hayakawa Y, Toyama N, Matsuyama H, Iwamoto K, Suzuki H, Liu Y (2009) A study on subsurface structure model for deep sedimentary layers of Japan for strong-motion evaluation. Technical Note of the National Research Institute for Earth Science and Disaster Prevention, No. 337 (in Japanese) Fujiwara H, Kawai S, Aoi S, Morikawa N, Senna S, Azuma H, Ooi M, Hao KX, Hasegawa N, Maeda T, Iwaki A, Wakamatsu K, Imoto M, Okumura T, Matsuyama H, Narita A (2012) Some improvements of seismic hazard assessment based on the 2011 Tohoku earthquake. Technical Note of the National Research Institute for Earth Science and Disaster Prevention, No. 379, 1–349 (in Japanese) Hanks TC (1979) b values and ω−γ seismic source models: implications for tectonic stress variations along active crustal fault zones and the estimation of high-frequency strong ground motion. J Geophys Res 84(B5):2235–2242. https://doi.org/10.1029/JB084iB05p02235 Hanks TC, Kanamori H (1979) A moment magnitude scale. J Geophys Res 84(B5):2348–2350. https://doi.org/10.1029/JB084iB05p02348 Hayashimoto N, Nakamura T, Hoshiba M (2019) A technique for estimating the UD-component displacement magnitude for earthquake early warnings that can be applied to various seismic networks including ocean bottom seismographs. Q J Seismol 83:1–10 (in Japanese with English abstract) Hirose F (2022) Plate configuration. Retrieved from https://www.mri-jma.go.jp/Dep/sei/fhirose/plate/en.index.html. Accessed 10 May 2022 Iwata T, Irikura K (1988) Source parameters of the 1983 Japan Sea earthquake sequence. J Phys Earth 36:155–184 Kato K, Takemura M, Ikeura T, Urao K, Uetake T (1992) Preliminary analysis for evaluation of local site effects from strong motion spectra by an inversion method. J Phys Earth 40:175–191. https://doi.org/10.4294/jpe1952.40.175 Klimasewski A, Sahakian V, Baltay A, Boatwright J, Fletcher JB, Baker LM (2019) κ0 and broadband site spectra in Southern California from source model-constrained inversion. Bull Seismol Soc Am 109(5):1878–1889. https://doi.org/10.1785/0120190037 Kubo H, Nakamura T, Suzuki W, Dhakal YP, Kimura T, Kunugi T, Takahashi N, Aoi S (2019) Ground-motion characteristics and nonlinear soil response observed by DONET1 seafloor observation network during the 2016 southeast off-Mie, Japan, earthquake. Bull Seismol Soc Am 109:976–986. 
https://doi.org/10.1785/0120170296 Morikawa N, Sasatani T (2003) Source spectral characteristics of two large intra-slab earthquakes along the southern Kurile-Hokkaido arc. Phys Earth Planet Inter 137:67–80. https://doi.org/10.1016/S0031-9201(03)00008-6 Nakamura R (2009) 3-D Attenuation structure beneath the Japanese islands, source parameters and site amplification by simultaneous inversion using short period strong motion records and predicting strong ground motion. Doctoral Dissertation, The University of Tokyo, 1–206 Nakamura T, Hayashimoto N (2019) Rotation motions of cabled ocean-bottom seismic stations during the 2011 Tohoku earthquake and their effects on magnitude estimation for early warnings. Geophys J Int 216:1413–1427. https://doi.org/10.1093/gji/ggy502 Nakamura R, Satake K, Toda S, Uetake T, Kamiya S (2006) Three-dimensional attenuation (Qs) structure beneath the Kanto district, Japan, as inferred from strong motion records. Geophys Res Lett 33:L21304. https://doi.org/10.1029/2006GL027352 Nakamura T, Takenaka H, Okamoto T, Ohori M, Tsuboi S (2015) Long-period ocean-bottom motions in the source areas of large subduction earthquakes. Sci Rep 5:16648. https://doi.org/10.1038/srep16648 Nakano K, Matsushima S, Kawase H (2015) Statistical properties of strong ground motions from the generalized spectral inversion of data observed by K-NET, KiK-net, and the JMA Shindokei Network in Japan. Bull Seismol Soc Am 105(5):2662–2680. https://doi.org/10.1785/0120140349 Nishizawa A, Uehira K, Mochizuki M (2022) Sediment distribution beneath S-net stations derived from multi-channel seismic reflection profiles and hypocenter determination using the sediment correction. Technical Note of the National Research Institute for Earth Science and Disaster Resilience, No. 471 (in Japanese with English abstract) Oth A, Bindi D, Parolai S, Giacomo DD (2010) Earthquake scaling characteristics and the scale-(in)dependence of seismic energy-to-moment ratio: insights from KiK-net data in Japan. Geophys Res Lett 37:L19304. https://doi.org/10.1029/2010GL044572 Oth A, Bindi D, Parolai S, Giacomo DD (2011) Spectral analysis of K-NET and KiK-net data in Japan, Part II: on attenuation characteristics, source spectra, and site response of borehole and surface stations. Bull Seismol Soc Am 101(2):667–687. https://doi.org/10.1785/0120100135 Ren Y, Wen R, Yamanaka H, Kashima T (2013) Site effects by generalized inversion technique using strong motion recordings of the 2008 Wenchuan earthquake. Earthq Eng Eng Vib 12:165–184. https://doi.org/10.1007/s11803-013-0160-6 Satoh T, Tatsumi Y (2002) Source, path, and site effects for crustal and subduction earthquakes inferred from strong motion records in Japan. J Struct Constr Eng (Transactions AIJ) 67(Issue 556):15–24. https://doi.org/10.3130/aijs.67.15_2 (in Japanese with English abstract) Sawazaki K, Nakamura T (2020) "N"-shaped Y/X coda spectral ratio observed for in-line-type OBS networks; S-net and ETMC: interpretation based on natural vibration of pressure vessel. Earth Planets Space 72:130. https://doi.org/10.1186/s40623-020-01255-6 Searle SR (1971) Linear models. John Wiley & Sons, Inc. Takagi R, Uchida N, Nakayama T, Azuma R, Ishigami A, Okada T, Nakamura T, Shiomi K (2019) Estimation of the orientations of the S-net cabled ocean-bottom sensors. Seismol Res Lett 90:2175–2187. https://doi.org/10.1785/0220190093 Tsuda K, Archuleta RJ, Koketsu K (2006) Quantifying the spatial distribution of site response by use of the Yokohama high-density strong-motion network. 
Bull Seismol Soc Am 96(3):926–942. https://doi.org/10.1785/0120040212 Umino N, Hasegawa A (1984) Three-dimensional Qs structure in the northeastern Japan arc. Zisin 37(2):217–228 (in Japanese with English abstract). https://doi.org/10.4294/zisin1948.37.2_217 Wessel P, Smith WHF (1998) New, improved version of generic mapping tools released. Eos Trans AGU 79:579. https://doi.org/10.1029/98EO00426 Yamanaka H (2005) Comparison of performance of heuristic search methods for phase velocity inversion in shallow surface wave method. J Environ Eng Geophys 10:163–173. https://doi.org/10.2113/JEEG10.2.163 Yamanaka H, Nakamura A, Kurita K, Seo K (1998) Evaluation of site effects by an inversion of S-wave spectra with a constraint condition considering effects of shallow weathered layers. Zisin 55:193–202. https://doi.org/10.4294/zisin1948.51.2_193 (in Japanese with English abstract)

Acknowledgements
We would like to thank the Japan Meteorological Agency for providing us with hypocenter information for the earthquakes used in this study. We are thankful to the two anonymous reviewers for their constructive comments, which helped us significantly improve the manuscript. We would also like to thank Wessel and Smith (1998) for providing us with Generic Mapping Tools, which were used to make figures in the paper. We extend our gratitude to Khagendra Acharya for English language editing.

Funding
This study was supported by the "Advanced Earthquake and Tsunami Forecasting Technologies Project" of NIED and JSPS KAKENHI Grant Number JP20K05055.

Authors and affiliations
National Research Institute for Earth Science and Disaster Resilience, Tsukuba, Japan: Yadab P. Dhakal, Takashi Kunugi, Atsushi Wakai, Shin Aoi & Azusa Nishizawa
Tokyo Institute of Technology, Tokyo, Japan: Hiroaki Yamanaka

Author contributions
Y.P.D. processed the strong-motion recordings, performed the spectral inversion, and drafted the manuscript. T.K. and S.A. conceptualized the study. A.W. prepared the site condition information at the land sites. H.Y. obtained the reference site amplification factors used in the inversion. A.N. contributed to relating the seismic survey results to the results of the spectral inversion. All the authors discussed the contents and provided their comments. All authors read and approved the manuscript.

Correspondence to Yadab P. Dhakal.

Additional files
Matrix equation used in the spectral inversion
Velocity profiles at the reference sites on land
Plots of focal mechanisms and stress drops of earthquakes
J-SHIS velocity profiles and comparison of site amplification factors
List of sites with spurious spectra

Cite this article
Dhakal, Y.P., Kunugi, T., Yamanaka, H. et al. Estimation of source, path, and site factors of S waves recorded at the S-net sites in the Japan Trench area using the spectral inversion technique. Earth Planets Space 75, 1 (2023). https://doi.org/10.1186/s40623-022-01756-6

Keywords: Site amplification, Path effects, Stress drops, Spectral inversion, S-net, Ocean bottom seismographs, Japan Trench
Effects of Surface Geology on Seismic Motion (ESG): General State-of-Research
$93,919 in 1992 → 2005
$93,919 in 1992 is worth $130,736.85 in 2005
$93,919 in 1992 has the same purchasing power as $130,736.85 in 2005. Over the 13 years this is a change of $36,817.85. The average inflation rate of the dollar between 1992 and 2005 was 2.61% per year, and the cumulative price increase over this time was 39.20%.

The value of $93,919 from 1992 to 2005
So what does this data mean? It means that prices in 2005 were on average 39.20% higher than prices in 1992, so a dollar in 2005 could buy 71.84% of what it could buy in 1992. These inflation figures use the Bureau of Labor Statistics (BLS) consumer price index to calculate the value of $93,919 between 1992 and 2005. The inflation rate for 1992 was 3.01%, while the inflation rate for 2005 was 3.39%. The 2005 inflation rate is higher than the average inflation rate of 2.41% per year between 2005 and 2021.

USD Inflation Since 1913
Inflation rates have been tracked since 1913, when the Bureau of Labor Statistics' Consumer Price Index (CPI) was first established.

The Buying Power of $93,919 in 1992
We can look at the buying power equivalent of $93,919 in 1992 to see how much you would need in order to beat inflation. For 1992 to 2005, if you started with $93,919 in 1992, you would need to have $130,736.85 in 2005 to keep up with inflation. If $93,919 in 1992 is equivalent to $130,736.85 in 2005, you can see the core concept of inflation in action: the "real value" of a single dollar decreases over time, and it will pay for fewer items at the store than it did previously.

Value of $93,919 Over Time
In the table below we can see the value of the US dollar over time. According to the BLS, each of these amounts is equivalent in terms of what that amount could purchase at the time.

Year    Dollar Value    Inflation Rate
1992    $93,919.00      3.01%
…       $102,018.93     2.83%

US Dollar Inflation Conversion
If you're interested in seeing the effect of inflation on various 1992 amounts, the table below shows how much each amount would be worth in 2005 based on the price increase of 39.20%.

$1.00 in 1992 → $1.39 in 2005
$10.00 in 1992 → $13.92 in 2005
$100.00 in 1992 → $139.20 in 2005
$1,000.00 in 1992 → $1,392.02 in 2005
$10,000.00 in 1992 → $13,920.17 in 2005
$100,000.00 in 1992 → $139,201.71 in 2005
$1,000,000.00 in 1992 → $1,392,017.11 in 2005

Calculate Inflation Rate for $93,919 from 1992 to 2005
To calculate the value of $93,919 from 1992 in 2005 dollars, we use the following formula:

$$\dfrac{\text{1992 USD value} \times \text{CPI in 2005}}{\text{CPI in 1992}} = \text{2005 USD value}$$

We then replace the variables with the historical CPI values. The CPI was 140.3 in 1992 and 195.3 in 2005:

$$\dfrac{\$93{,}919 \times 195.3}{140.3} = \$130{,}736.85$$

$93,919 in 1992 has the same purchasing power as $130,736.85 in 2005. To work out the total inflation rate for the 13 years between 1992 and 2005, we can use a different formula:

$$\dfrac{\text{CPI in 2005} - \text{CPI in 1992}}{\text{CPI in 1992}} \times 100 = \text{cumulative rate for 13 years}$$

Again, we can replace those variables with the correct Consumer Price Index values to work out the cumulative rate:

$$\dfrac{195.3 - 140.3}{140.3} \times 100 = 39.20\%$$

Inflation Rate Definition
The inflation rate is the percentage increase in the average price level of a basket of selected goods over time.
It indicates a decrease in the purchasing power of currency and results in an increased consumer price index (CPI). Put simply, the inflation rate is the rate at which the general prices of consumer goods increase while the currency's purchasing power falls. The most common cause of inflation is an increase in the money supply, though it can be caused by many different circumstances and events. The value of a floating currency starts to decline when it becomes abundant: the currency is no longer as scarce and, as a result, no longer as valuable. By comparing the prices of a standard list of products (the CPI) over time, the inflation rate measures this change. The prices of products such as milk, bread, and gas are grouped together and tracked over time. When the prices of these products increase, inflation shows that the money used to buy them is not worth as much as it used to be. The inflation rate is basically the rate at which money loses its value when compared to the basket of selected goods, a fixed set of consumer products and services that are valued on an annual basis.

Source: StudyFinance.com, "$93,919 in 1992 is worth $130,736.85 in 2005", https://studyfinance.com/inflation/us/1992/93919/2005/ (accessed 22 January 2022).
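The two formulas above reduce to a few lines of arithmetic. The short Python sketch below reproduces the headline numbers from the CPI values quoted in the text (140.3 for 1992 and 195.3 for 2005); the function names are illustrative only.

```python
# CPI-based conversion of $93,919 from 1992 dollars to 2005 dollars.
# The CPI values are the ones quoted above; everything else is plain arithmetic.

def adjust_for_inflation(amount: float, cpi_start: float, cpi_end: float) -> float:
    """Value of `amount` in end-year dollars, using the ratio of the two CPIs."""
    return amount * cpi_end / cpi_start

def cumulative_inflation(cpi_start: float, cpi_end: float) -> float:
    """Total price increase between the two years, in percent."""
    return (cpi_end - cpi_start) / cpi_start * 100.0

cpi_1992, cpi_2005 = 140.3, 195.3
print(f"${adjust_for_inflation(93_919, cpi_1992, cpi_2005):,.2f}")  # -> $130,736.85
print(f"{cumulative_inflation(cpi_1992, cpi_2005):.2f}%")           # -> 39.20%
```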
Impacts of forestation on the annual and seasonal water balance of a tropical catchment under climate change Hero Marhaento ORCID: orcid.org/0000-0001-5601-62611, Martijn J. Booij2, Noorhadi Rahardjo3 & Naveed Ahmed4 This study aims to assess the effects of a forestation program and climate change on the annual and seasonal water balance of the Bogowonto catchment (597 km2) in Java, Indonesia. The catchment study is rare example in Indonesia where forestation has been applied at the catchment level. However, since the forestation program has been initiated, evaluations of the program only focus on the planting area targets, while the environmental success e.g., impacts on the hydrological processes have never been assessed. This study used a calibrated Soil and Water Assessment Tool (SWAT) model to diagnose the isolated and combined effects of forestation and climate change on five water balance components, namely streamflow (Q), evapotranspiration (ET), surface runoff (Qs), lateral flow (Ql) and base flow (Qb). The results show that from 2006 to 2019, forest cover has increased from 2.7% to 12.8% of the total area, while in the same period there was an increase in the mean annual and seasonal temperature, rainfall, and streamflow. Results of SWAT simulations show that changes in the mean annual and seasonal water balance under the forestation only scenario were relatively minor, while changes were more pronounced under the climate change only scenario. Based on the combined impacts scenario, it was observed that the effects of a larger forest area on the water balance were smaller than the effects of climate change. Although we found that forestation program has minor impacts compared to that of climate change on the hydrological processes in the Bogowonto catchment, seasonally, forestation activity has decreased the streamflow and surface runoff during the wet season which may reduce the risk of moderate floods. However, much attention should be paid to the way how forestation may result in severe drought events during the dry season. Finally, we urge the importance of accounting for the positive and negative effects in future forestation programs. Water availability in a catchment is influenced by both climate change and land use change (Romanowicz and Booij 2011; Wohl et al. 2012). However, both factors likely operate at different spatial levels (DeFries and Eshleman 2004). Land use change impacts on hydrological processes are likely more pronounced at the local scale (Bosch and Hewlett 1982; Wohl et al. 2012; Gallo et al. 2015; Marhaento et al. 2017b; Marhaento et al. 2021), while effects of climate change on hydrological processes are found to be more significant at large spatial scales (> 100 km2) (Blöschl et al. 2007; Wohl et al. 2012; Beck et al. 2013). Combinations of land use and climate changes may not only result in accelerating effects on the water balance (Khoi and Suetsugi 2014; Marhaento et al. 2018), but may also offset each other (Zhang et al. 2016). Although it is evident that interactions between land use change and climate change may be operative at the catchment level, the extents and directions of the changes in the water balance are not well understood (Blöschl et al. 2007; Romanowicz and Booij 2011; Wohl et al. 2012; Marhaento et al. 2021). 
For tropical regions including Indonesia, land use and climate in the future are generally characterized by continuous deforestation, an increase in the mean temperature, and changes in the spatial and temporal rainfall variability (Nobre et al. 2016). As a result, there will be an increased frequency of disastrous events (e.g., droughts and floods) for this region (IPCC 2012). It is generally agreed that deforestation may significantly reduce canopy interception and soil infiltration capacity resulting in an increase of surface runoff (Bruijnzeel 1989, 2004; Ogden et al. 2013; Marhaento et al. 2017b). With the influence of climate change (particularly changes in temperature and rainfall patterns), the combined impacts on hydrological processes are more pronounced than individual impacts of land use change or climate change (Legesse et al. 2003; Hejazi and Moglen 2008; Khoi and Suetsugi 2014). Marhaento et al. (2018) simulated individual and combined impacts of land use change and climate change on hydrological processes in the Samin catchment (278 km2) in Java, Indonesia and found that both land use change and climate change contribute to changes in the water balance components, but each driver has a specific contribution to the water balance alteration. Land use change likely contributes to changes in annual evapotranspiration, while climate change rather contributes to changes in annual base flow (Khoi and Suetsugi 2014; Marhaento et al. 2017b; Marhaento et al. 2018). Combinations of the two drivers may result in more pronounced changes in annual streamflow and surface runoff. In order to mitigate future risks associated with land use change (i.e., deforestation) and climate change, increasing global and regional forest cover through forestation program has been widely promoted. There is a widespread agreement in the community that planting large areas of trees may increase a more evenly spread water balance in time (i.e., wet and dry seasons), which supports the mitigation of floods during the rainy season and of drought during the dry season (Bosch and Hewlett 1982; Bruijnzeel 2004; Brown et al. 2005; Suryatmojo et al. 2011; Marhaento et al. 2019). However, although a reforestation program is considered as a long-term process with long-term benefits, existing evaluations of the success of these programs tend to focus on short-term success indicators such as planting area targets (Le et al. 2012). To date, only few evaluations have measured the impacts of forestation projects on the environment, even though restoring ecosystem functions (e.g., nutrient recycling, primary production, decomposition of dead matter) and ecosystem services (e.g., food, water, oxygen) are always stated as the main objective of forestation (Sala et al. 2000). The latest review from Bentley and Coomes (2020) shows that forestation programs may have been linked with reduced river flow and potentially detrimental effects to downstream areas. Their meta-analysis for hundreds of catchments revealed that in general forestation reduces annual river flow (by 23% after 5 years and 38% after 25 years) with greater reductions in catchments with higher mean annual precipitation and larger increases in forest cover. In addition, they argue that the impact of forests on river flow is sensitive to annual precipitation and potential evapotranspiration, but responses are highly variable due to climate change, where the role of climate change is still unexplored requiring further study. 
This study aims to assess the impacts of forestation on the annual and seasonal water balance of a tropical catchment under climate change conditions. The Bogowonto catchment (597 km2) on Java Island, Indonesia is selected as location of study because this catchment is a rare example in Indonesia where forestation has been applied at the landscape level. In this study, a modelling approach was used to achieve the research objective. A calibrated and validated Soil and Water Assessment Tool (SWAT) (Arnold et al. 1998) was used to simulate hydrological processes in the Bogowonto catchment. While most studies of hydrological processes under changing conditions (i.e., climate and land use) were mainly focused on assessing the effects of deforestation, less attention has been given to the impacts of forestation programs. Through this study, we want to investigate the long-term impacts of the forestation program in the Bogowonto catchment, which has been executed since early 2000, under climate change conditions. In order to achieve the study objective, two relevant questions are addressed: a) what is the effect of forestation in the Bogowonto catchment on the water balance under climate change conditions? and (b) what is the long-term trajectory of water availability in the Bogowonto catchment following forest establishment? Although this research is conducted in a single catchment (i.e., Bogowonto catchment), it is thought to represent problems characteristic for the hydrology in tropical catchments having forestation programs. A better understanding will give insight in the potential effects of forestation programs on water availability at catchment scale. Study area and data availability Catchment description Bogowonto is one of the major rivers in Central Java Province, Indonesia and plays an important role in supporting life within its surrounding area. It is located in the southern part of Central Java Province, shared by 4 districts namely: Purworejo, Magelang, Kebumen and Kulon Progo, but a large part is located in the Purworejo District. The river length is around 67 km with a catchment area of about 597 km2. Geographically, it is located between latitude 7°23′–7°54′ South and longitude 109°56′–109°10′ East, where the highest part of the catchment is located on the Sumbing Mountain with an altitude of 3278 m above mean sea level (a.m.s.l.) and the catchment outlet is located close to the Indian Ocean with an altitude of 26-m a.m.s.l., as shown in Fig. 1. Map of Bogowonto catchment area with locations of rainfall, meteorological and river gauges The Bogowonto catchment area has a diverse topography ranging from plain (0–8%) in the downstream part and very steep slopes (> 45%) in the upstream part occupying more than 25% of the area. There are four soil types where two types are dominant namely vertic luvisols (30.6%) and lithosols (47.1%). Vertic luvisols is a tropical soil mostly used by small farmers because of its ease of cultivation and no great impediments (FAO 2001). With base saturation > 50%, this soil is greatly affected by water erosion and loss in fertility since nutrient deposits are concentrated in the topsoil. Lithosols are typical thin soils often found in steep hilly or mountainous regions where erodible material is rapidly removed by erosion (FAO 2001). Figure 2 shows the slope and soil maps of the Bogowonto catchment. Slope map (a) and soil map (b) of Bogowonto catchment To set up the hydrological model, spatial and non-spatial data were used. 
For the spatial data, land use maps for the years 2006 and 2019 were available for the study area from the Ministry of Forestry (MoF). It is freely accessible (with a permission) at the scale of 1:250,000. Furthermore, field visits were carried out to validate the land cover map classification. The Digital Elevation Model (DEM) of the study was generated from DEMNAS (http://tides.big.go.id/DEMNAS/), which was made available through the Geospatial Information Agency of Indonesia at around 8-m spatial resolution. A soil map at 30 arc-second spatial resolution was taken from the Harmonized World Soil Database (FAO/IIASA/ISRIC/ISSCAS/JRC 2012). For the non-spatial data, daily rainfall (R) from 12 rainfall ground stations located within the vicinity of the Bogowonto catchment was provided by the Serayu Opak River Basin Organization. However, the data only covered the period 2002–2011 and contained missing values for almost 10% of the data. Since the available R data from the ground stations were not sufficient for the analysis, a grid-based daily R dataset from the Climate Hazards Group Infrared Precipitation with Station data (CHIRPS) for the period 2000–2019 were used. CHIRPS is a 30+ year quasi-global (50° S–50° N) daily R dataset with 0.05° spatial resolution and is available from 1981 to present (Funk et al. 2015). It has been applied and validated in many hydrological simulations across various regions and it has been suggested that this satellite product can be applied to data-scarce locations (Tuo et al. 2016; Beck et al. 2017; Paredes-Trejo et al. 2017). We corrected the CHIRPS dataset using the ground R stations data with a simple scaling method where we calculated monthly correction factors based on the ratio of monthly satellite-based R values to monthly ground-based R values (Katiraie-Boroujerdy et al. 2020). Meteorological data other than R data in the study area were made available from a single meteorological station (i.e., Kradenan station). Similarly, to the rainfall dataset, it was only available for the period 2002–2011 with missing values for almost 20% of the data. For this reason, we used satellite-based meteorological data for the analysis. We used minimum and maximum daily temperature (Tmin and Tmax) data from the National Aeronautics Space Administration (NASA) Earth Exchange Global Daily Downscaled Projections (NEX-GDDP) dataset with a 0.25° spatial resolution on a daily basis (Thrasher et al. 2012). The NEX-GDDP products have been cited to be a promising source of climatic data as input for hydrological models at regional and local scales (Bokhari et al. 2018; Song et al. 2020) including the South East Asia region (Nauman et al. 2019). For this study, we applied the multi-model averaging concept (i.e., ensemble) for Tmin and Tmax from the NEX-GDPP dataset for the period 2000–2019. Subsequently, we corrected the temperature (T) dataset using the ground meteorological station data using, again, the simple scaling method. For the model calibration, monthly streamflow (Q) data were provided by the Serayu Opak River Basin Organization for the period 2002–2015. The reliability of the Q data was ensured through data screening and a visual check of the hydrograph. Figure 3 shows the observed mean annual R and Q of the Bogowonto River. 
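The gauge-based correction of the CHIRPS rainfall is described only as a simple scaling with monthly correction factors, so the exact implementation is not known. The Python sketch below shows one plausible form of such a correction; the multiplicative factor, the pandas layout and the variable names are assumptions made for illustration.

```python
import pandas as pd

def monthly_scaling_correction(chirps: pd.Series, gauge: pd.Series) -> pd.Series:
    """Scale daily satellite rainfall so its monthly climatology matches the gauges.

    `chirps` and `gauge` are assumed to be daily rainfall series (mm/day) with a
    DatetimeIndex; the multiplicative correction is an assumption, not the
    authors' exact implementation.
    """
    overlap = chirps.index.intersection(gauge.index)

    # Mean rainfall per calendar month (1..12) over the overlapping period
    sat_clim = chirps.loc[overlap].groupby(overlap.month).mean()
    obs_clim = gauge.loc[overlap].groupby(overlap.month).mean()

    # One correction factor per calendar month; fall back to 1.0 where undefined
    factors = (obs_clim / sat_clim).replace([float("inf")], 1.0).fillna(1.0)

    # Apply the factor of the corresponding month to every daily CHIRPS value
    return chirps * chirps.index.month.map(factors).to_numpy()
```

Computing the factor as the satellite-to-gauge ratio and dividing by it, which is closer to the literal wording in the text, gives the same corrected series.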
Fig. 3 Mean annual rainfall and streamflow for the period 2002–2015 in the Bogowonto catchment

Land use change and hydro-climatic trend analysis
Field visits were carried out to collect information about the land use classes, such as tree and crop species as well as settlement patterns. The land use information, together with the land use map produced by the MoF, was used to determine the SWAT land use database. Trend analysis was carried out to check whether the continuous time series of annual and seasonal hydro-climatic variables of the Bogowonto catchment have significantly changed over time (long-term). We used the Mann-Kendall statistical test to detect trends in the annual and seasonal Tmax, Tmin, average temperature (Tav) and R for the period 2000–2019 and Q for the period 2002–2015, and employed Sen's slope estimator (Sen 1968) to determine the magnitude of the trend. The Mann-Kendall statistical test and Sen's slope estimator were selected since they have been widely used to detect trends in long time series of hydrological and climatological data (Rientjes et al. 2011; Zhang et al. 2014; Marhaento et al. 2017a).

SWAT model set up
This study used the Soil and Water Assessment Tool (SWAT) model (Arnold et al. 1998) to simulate the hydrological processes of the study catchment. It is a semi-distributed model operating on a daily time step with proven suitability for hydrologic impact studies around the world, including the South East Asia region (Khoi and Suetsugi 2014; Marhaento et al. 2017b; Marhaento et al. 2018; Tarigan et al. 2018). The water balance in the SWAT model includes inflows, outflows and variations in storages (Arnold et al. 1998). R is the main inflow in the model. The outflows are actual evapotranspiration (ET), surface runoff (Qs), lateral flow (Ql) and base flow (Qb). There are four water storage possibilities in SWAT, namely snowpack, soil moisture, shallow aquifer, and deep aquifer. However, we excluded snowpack storage because snowfall is not relevant in the study catchment. Flows between storages are percolation from the soil moisture storage to the shallow aquifer storage, capillary rise from the shallow aquifer to the soil moisture storage, and deep aquifer recharge. Q is the sum of Qs, Ql, and Qb. For a more detailed description of the SWAT model, reference is made to Neitsch et al. (2011).
The model set-up started by delineating the catchment boundaries and dividing the catchment into sub-catchments based on the DEM data. To do so, we used a stream network map from the Indonesia Geospatial Information Agency to "burn in" the simulated stream network from SWAT to create accurate flow routing. This resulted in 13 sub-catchments, ranging in size from 3.6 to 90.9 km2. In addition, the DEM was used to generate a slope map with five classes, namely 0–8% (flat), 8%–15% (moderate), 15%–25% (moderately steep), 25%–45% (steep), and > 45% (very steep). According to the land cover map from the MoF, land use in the Bogowonto catchment consists of eight classes, namely forest, plantation, dryland farming, paddy field, shrub, bareland, settlement and water body. These land use classes were given the codes FRST, AGRC, AGRR, RICE, RNGB, BARR, URMD and WATR, respectively, from the SWAT database. In the SWAT land use database, there are several options to define settlements.
In this study, we chose the class Urban Residential Medium Density (URMD) to assign the settlement area due to the conditions that the settlements in the study area are not fully impervious providing some pervious spaces in between the houses that are often used for house yards. URMD assumes an average of 38% impervious area in the settlement area (Neitsch et al. 2011), which is relatively similar to the settlement conditions in the study catchment. Soil characteristics of four soil types were taken from the Harmonized World Soil Database (HWSD) (FAO/IIASA/ISRIC/ISSCAS/JRC 2012). The soil characteristics required as SWAT input that were not available in HWSD such as available water content, saturated hydraulic conductivity and bulk density, were obtained from the Soil-Plant-Atmosphere-Water (SPAW) model (Saxton and Willey 2005). This soil model uses pedotransfer functions including information on soil texture, soil salinity, organic matter, gravel and soil compaction to determine water retention characteristics (Saxton and Willey 2005). HRUs were created by spatially overlying maps of land use, soil and slope classes. A temperature-based evapotranspiration method namely the Hargreaves method was used to calculate potential evapotranspiration (ETo). The actual evapotranspiration (ET) then was simulated based on the calculated ETo, water availability in the soil and plant characteristics. For runoff simulations, the Soil Conservation Service Curve Number (SCS-CN) method adjusted for slope effects was selected because it has a direct link to land use types and assumes an average slope of more than 5% (Williams 1995). For flow routing, the Muskingum method that models the storage volume as a combination of wedge and prism storage was used (Neitsch et al. 2011). After completing the model set-up, a hydrological simulation was run from 2000 to 2019 including 2 years "warming-up" period. Model calibration and validation Model calibration and validation aim to produce a robust SWAT model. In this study, the available monthly Q data from 2002 to 2015 were split into two periods: 2002–2010 (i.e., calibration period) and 2011–2015 (i.e., validation period). We followed the procedure from Abbaspour et al. (2015) to calibrate the model. First, a simulation was executed using the default SWAT parameters. Second, the resulting hydrograph was visually compared with the observed hydrograph. Third, based on the characteristics of the differences between observed and simulated hydrographs (e.g., underestimation or overestimation of Q, shifted Q), relevant SWAT parameters were identified. Fourth, one-at-a-time sensitivity analysis was carried out to identify the most sensitive parameters among the relevant parameters (Abbaspour et al. 2015; Marhaento et al. 2017b, 2018). Finally, the selected sensitive parameters were calibrated. We chose to follow the calibration procedure from Abbaspour et al. (2015) because they provide a general protocol for SWAT model calibration which helped to select the appropriate parameters to be calibrated and thus shorten the parameterization time. We have used the Latin Hypercube Sampling approach from the Sequential Uncertainty Fitting version 2 (SUFI-2) in the SWAT-Calibration and Uncertainty Procedure (SWAT-CUP) package to calibrate the selected parameters. First parameter ranges were determined based on minimum and maximum values allowed in SWAT. 
A number of iterations were performed where each iteration consisted of 1000 simulations with narrowed parameter ranges in subsequent calibration rounds. We stopped the calibration when the objective function value did not significantly change anymore in subsequent iterations. In this study, evaluations of model calibration were carried out on a monthly basis and the Kling-Gupta Efficiency (KGE; Gupta et al. 2009) (Eq. 1) was used as the objective function. We chose KGE as objective function since it combines the three components of the Nash-Sutcliffe efficiency (NSE) (i.e., correlation, bias, ratio of variances) in a balanced way (Liu 2020). Moreover, it has been widely used for calibration and evaluation of hydrological models in recent years (Pool et al. 2018; Knoben et al. 2019). The model performs well when the KGE value is close to 1. $$ \mathrm{KGE}=1-\sqrt{{\left(r-1\right)}^2+{\left(\frac{\sigma_{\mathrm{sim}}}{\sigma_{\mathrm{obs}}}-1\right)}^2+{\left(\frac{\mu_{\mathrm{sim}}}{\mu_{\mathrm{obs}}}-1\right)}^2} $$ where r is the linear correlation coefficient between the observed and the simulated data set, σobs is the standard deviation of the observations, σsim the standard deviation of the simulations, μsim the simulated mean, and μobs the observed mean. Assessing the impacts of forestation under varying climatic conditions To assess the effect of forestation and climate change on the hydrological processes, we used the one-factor-at-a-time method (Li et al. 2009; Lyu et al. 2019). In this method, meteorological data of 2002–2019 excluding 2 years warming-up period were equally split representing the baseline period (i.e., 2002–2010) and a change period (i.e., 2011–2019). The land use maps of 2006 and 2019 were used to represent the land use conditions in the two time periods, respectively. Furthermore, four simulations were carried out using the calibrated SWAT model: a combination of the land-use map for 2006 with the climate data for 2002–2010 (S1), the land use map for 2019 with the climate data for 2002–2010 (S2), the land use map for 2006 with the climate data for 2011–2019 (S3), and the land-use map for 2019 with the climate data for 2011–2019 (S4). Scenario S1 was regarded as the baseline condition. Scenarios S2 and S3 minus scenario S1 can be used to determine the impacts of individual land use change (i.e., forestation) and climate change, respectively. Scenario S4 minus scenario S1 can be used to determine the combined impacts of land use change (forestation) and climate change on hydrological processes. Finally, we diagnosed changes in five annual and seasonal water balance components, namely Q, ET, Qs, Ql and Qb for each scenario. The annual water balance was calculated based on the annual mean, while the seasonal water balance was calculated based on the accumulation of each water balance component in December–January-February (DJF), March–April–May (MAM), June–July-August (JJA), and September–October-November (SON). The DJF period represents the wettest period (wet season) of the year, while the JJA period represents the driest period (dry season) of the year. MAM and SON periods represent transition periods from wet to dry season and from dry to wet season, respectively. These annual and seasonal water balance components have been used as indicators of land use change and climate change impacts on hydrological processes in a tropical catchment (Marhaento et al. 2017b; Marhaento et al. 2019). 
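For reference, Eq. 1 translates directly into code. The sketch below (Python with NumPy) illustrates the objective function only and is not the SWAT-CUP/SUFI-2 implementation; sim and obs stand for aligned series of simulated and observed monthly streamflow.

```python
import numpy as np

def kge(sim, obs) -> float:
    """Kling-Gupta Efficiency (Eq. 1); a value of 1.0 indicates a perfect fit."""
    sim, obs = np.asarray(sim, dtype=float), np.asarray(obs, dtype=float)
    r = np.corrcoef(sim, obs)[0, 1]        # linear correlation coefficient
    alpha = np.std(sim) / np.std(obs)      # sigma_sim / sigma_obs (variability ratio)
    beta = np.mean(sim) / np.mean(obs)     # mu_sim / mu_obs (bias ratio)
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)
```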
Forestation in the Bogowonto catchment
According to the land use map from the MoF, land use in the Bogowonto catchment in 2019 was dominated by forest (12.76%) and agricultural area (i.e., dryland farming, 77.2%). Forest area mostly covered the upper and middle part of the catchment (see Fig. 4). Based on ownership, there are two types of forest in the Bogowonto catchment. The first is state forest, which is associated with soil-water protected areas, mostly located at elevations higher than 2000 m on slopes ≥ 45°, and mostly occupied by homogeneous evergreen trees such as Pinus merkusii and Schima wallichii. This forest type is dominant in the upper part of the catchment on the slopes of the Sumbing Mountain. The second is privately owned forest, commonly called community forest. This forest type adopts an agroforestry system, a planting system dominated by multipurpose trees (e.g., fruit and timber trees), often combined with seasonal crops on the same unit of land. Swietenia mahagoni, Paraserianthes falcataria, and Tectona grandis are the tree species most commonly planted for wood production, while for fruit production the most frequently planted species are Durio sp., Mangifera indica, and Cocos nucifera. This community forest can mainly be found in the middle part of the catchment. The agricultural area in the Bogowonto catchment is mainly dominated by dryland farming. This land use type is spread over the catchment area, including the upland area, and is used for the production of seasonal crops (palawija) like maize, peanuts, soya beans, and chili. In the downstream area, the land use is mainly settlements and paddy fields. Bare land and shrubs are abandoned areas where the land is not available for agricultural purposes (i.e., critical land) and were mostly located in the upper and middle part of the catchment in hilly regions.

Fig. 4 Land use maps for 2006 (a) and 2019 (b) in the Bogowonto catchment

In 2006, according to the land use map from the MoF, land use in the Bogowonto catchment was also dominated by forest and agricultural area (i.e., dryland farming), but with different relative areas compared to 2019. Forest covered only about 2.7% of the catchment area, while dryland farming occupied 92.3% of the total area. Forests in 2006 only occupied a small part of the upstream catchment inside a state forest, whereas by 2019 forest had spread widely over the upper and middle part of the catchment. Apparently, during the last 13 years, the forestation through the forest and land rehabilitation program initiated by the MoF and the development of community forests in the Bogowonto catchment has been successfully implemented and thus significantly increased the forest cover, by around 10.1% of the total area. Table 1 shows the changes in land use from 2006 to 2019 in the Bogowonto catchment, while Fig. 4 shows the distribution of land use classes for 2006 and 2019.

Table 1 Land use distribution in 2006 and 2019 in the Bogowonto catchment

Hydro-climatic trends and magnitudes
Table 2 shows the results of the Mann-Kendall test and Sen's slope estimator. Trends in Tmax, Tmin, Tav, R and Q at both the seasonal and annual time scale were statistically not significant, except for Tmax during the DJF and SON periods, for which a significant increase (p-value < 0.05) in the maximum temperature was observed.
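The trend statistics summarized in Table 2 follow from the Mann-Kendall test and Sen's slope estimator described in the methods. The self-contained sketch below illustrates both calculations; it omits the tie correction of the variance and is not necessarily the implementation used in this study.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall_sen(x):
    """Return the Mann-Kendall Z statistic, two-sided p-value and Sen's slope of x."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    # S statistic: sum of signs of all pairwise forward differences
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))

    # Variance of S without the correction for tied values
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p_value = 2.0 * (1.0 - norm.cdf(abs(z)))   # two-sided test

    # Sen's slope: median of all pairwise slopes (trend magnitude per time step)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return z, p_value, np.median(slopes)
```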
Table 2 Results of statistical trend analysis for annual and seasonal maximum temperature (Tmax), minimum temperature (Tmin), average temperature (Tav), rainfall (R), and streamflow (Q) of the Bogowonto catchment for the period 2000–2019

Ten SWAT parameters related to groundwater flow (i.e., SHALLST, GWHT, and GW_DELAY), flow routing (i.e., CH_K2), surface runoff (i.e., CN2 and SLSUBBSN), evapotranspiration (i.e., ESCO), and soil infiltration (i.e., SOL_AWC, SOL_BD, and SOL_K) were calibrated. We refer to Neitsch et al. (2011) for a more detailed description of these SWAT parameters. Table 3 shows the calibrated values of the selected SWAT parameters. Figure 5 shows the observed and simulated hydrographs for the calibration and validation periods.

Table 3 Values of calibrated Soil and Water Assessment Tool parameters

Fig. 5 Observed and simulated hydrograph for the calibration period (2002–2010) and validation period (2011–2015)

The results of the model calibration show that the simulated mean monthly Q in the calibration period (2002–2010) agrees well with the observed records, with a KGE value of 0.79. In the validation period (2011–2015), the model performance decreases to a KGE of 0.74.

Changes in water balance
Annual water balance
Table 4 shows the simulated annual water balance components for all four hypothetical scenarios. Compared with the baseline scenario (S1), a significant change in the annual Q occurred under the climate change scenario (S3), with an increase of 104.8 mm (5.2%). Forestation (S2) apparently did not substantially change the annual Q, as it caused only a 23.6 mm (1.1%) decrease, while the combined effect of forestation and climate change (S4) increased the annual Q by 79.5 mm (3.9%). These results indicate that climate change increased Q whereas forestation slightly decreased it, and that the contribution of climate change was considerably larger than that of forestation. For the annual ET, all scenarios resulted in an increase of ET compared to the baseline scenario. The largest increase occurred under the combined forestation and climate change scenario (S4), followed by the S3 and S2 scenarios, respectively. After forestation (S2), the evaporative demand was larger than in the baseline period (a 4% increase). The increase was more than twice as large under the climate change scenario (S3), at 10.5%, and larger still under the combined scenario (S4), at 15%. These results show that there was a large evaporative demand in the catchment.

Table 4 Simulated average annual water balance components under different climate and land use conditions

Besides affecting the outflows, forestation and climate change also changed the fractions of Q. The most significant changes occurred in Ql and Qb under the combined forestation and climate change scenario (S4), where these components increased by 18.9 mm (12.7%) and 91.8 mm (10.1%), respectively. For Qs, the forestation scenario (S2) decreased Qs significantly compared to the climate change scenario (S3). Under the combined forestation and climate change scenario (S4), Qs decreased by 38.4 mm (4.2%). These results show that climate change, in terms of an increase of R, largely affected the fraction of Q becoming Ql and Qb, while forestation mostly controlled the amount of Qs in the study catchment. Figure 6 shows the changes in the mean annual water balance components under the different scenarios compared to the baseline scenario.
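The attribution behind Table 4 is the one-factor-at-a-time bookkeeping described in the methods: each scenario run is differenced against the baseline S1. The sketch below makes that explicit; the dictionary layout and the example numbers in the trailing comment are hypothetical and are not the values of Table 4.

```python
def attribute_changes(wb: dict) -> dict:
    """Isolate land-use, climate and combined effects relative to the baseline S1.

    `wb` maps each scenario ("S1".."S4") to a dict of mean annual water balance
    components in mm (Q, ET, Qs, Ql, Qb); this layout is an assumption made for
    the sketch.
    """
    baseline = wb["S1"]
    effects = {
        "forestation_only": "S2",  # 2019 land use, 2002-2010 climate
        "climate_only": "S3",      # 2006 land use, 2011-2019 climate
        "combined": "S4",          # 2019 land use, 2011-2019 climate
    }
    return {label: {var: wb[s][var] - baseline[var] for var in baseline}
            for label, s in effects.items()}

# Hypothetical example (NOT the values of Table 4), built around the reported
# changes of -23.6 mm (S2), +104.8 mm (S3) and +79.5 mm (S4) and an assumed baseline:
# wb = {"S1": {"Q": 2000.0}, "S2": {"Q": 1976.4}, "S3": {"Q": 2104.8}, "S4": {"Q": 2079.5}}
# attribute_changes(wb)["climate_only"]["Q"]  # -> 104.8
```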
Change in the mean annual water balance components namely streamflow (Q), evapotranspiration (ET), surface runoff (Qs), lateral flow (Ql) and base flow (Qb) under different scenarios compared to the baseline scenario Seasonal water balance Table 5 shows the simulated seasonal water balance components for all four hypothetical scenarios. It was observed that under the forestation scenario (S2), changes in seasonal Q were relatively minor, although it showed a consistent decrease throughout the months. Changes in Q were significant under the climate change scenario (S3), in particular during the wet months (i.e., DJF) with an increase of 129.3 mm (13.9%), while it decreased by 39.6 mm (17.1%) in SON months. Under the combined forestation and climate change scenario (S4), Q in DJF months increased by 117 mm (12.6%) and decreased by 50.3 mm (21.8%) SON months. These results showed that seasonal Q in the Bogowonto catchment has been mainly affected by climate change, where forestation activity contributed to amplify water loss during the dry season, but reduced Q during the wet season. For ET, an increase of ET in all seasons under all scenarios was observed. However, a significant increase of ET occurred under the combined forestation and climate change scenario (S4), with the largest changes occurring in the dry season (i.e., JJA months) with an increase of 45.6% (+ 20.9 mm) followed by SON (15.3%, + 15.2 mm), MAM (13.2%, + 33.1 mm), and DJF (10.1%, + 20.5 mm). These results showed that the forestation activity significantly increased the ET rate of the Bogowonto catchment in particular during the dry season and thus potentially caused more severe drought periods in the study catchment. Table 5 Simulated average seasonal water balance components under different climate and land use The Q components Qb and Ql significantly increased under the combined effects of forestation and climate change scenario (S4) especially during the wet seasons (DJF and MAM). In the DJF months, Qb and Ql increased by 74.1 mm (19.8%) and 13.3 mm (18.7%), while during the MAM months, Qb and Ql increased by 32.8 mm (8.2%) and 4.8 mm (9.7%), respectively, compared to the baseline scenario. However, a more pronounced impact of the combined forestation and climate change scenario occurred during the dry months (SON), where Qb decreased by 20.4 mm (34.7%) compared to the baseline scenario. Qs decreased in all periods under the forestation scenario (S2) with a pronounced decrease only in DJF by 16.9 mm (3.5%). However, under the climate change scenario (S3), there was an increase in Qs of 45.2 mm (9.5%) during the wet season (DJF). For the combined effect of forestation and climate change scenario (S4), Qs increased by 27 mm (5.7%) in the DJF months, while in the other periods, Qs significantly decreased by 27.4 mm (11.2%) in the MAM months, by 8.7 mm (19.4%) in the JJA months, and by 29.3 mm (20.1%) in the SON months, compared to the baseline scenario. These results showed that during the wet months, forestation activity significantly reduced Qs and increased Qb and Ql. However, in the dry months, both forestation and a drier climate resulted in a significant water loss in the catchment. Figure 7 shows the changes in the mean seasonal water balance components under different scenarios compared to the baseline condition. 
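The seasonal figures in Table 5 are accumulations of the simulated daily fluxes over DJF, MAM, JJA and SON. A minimal pandas sketch of that grouping is given below; the data layout is assumed, and for simplicity December is grouped with January and February of the same calendar year rather than shifted into the following wet season.

```python
import pandas as pd

SEASON = {12: "DJF", 1: "DJF", 2: "DJF", 3: "MAM", 4: "MAM", 5: "MAM",
          6: "JJA", 7: "JJA", 8: "JJA", 9: "SON", 10: "SON", 11: "SON"}

def mean_seasonal_totals(daily: pd.DataFrame) -> pd.DataFrame:
    """Mean seasonal accumulation (mm) of each water balance component.

    `daily` is assumed to be a DataFrame of daily components (e.g., Q, ET, Qs,
    Ql, Qb) with a DatetimeIndex, as exported from the model output.
    """
    seasons = daily.index.month.map(SEASON)
    years = daily.index.year
    # Accumulate within each season of each year, then average over the years
    per_year = daily.groupby([years, seasons]).sum()
    return per_year.groupby(level=1).mean()
```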
Change in the mean seasonal water balance components, streamflow (Q), evapotranspiration (ET), surface runoff (Qs), lateral flow (Ql) and base flow (Qb), under different scenarios compared to the baseline scenario Based on hydrological model simulations, this study shows that changes in the annual and seasonal water balance components of the study catchment can be attributed to both land use change (i.e., forestation) and climate change (i.e., an increase of R and T). Under the forestation only scenario, it was observed that the presence of forests has increased mean annual and seasonal ET and at the same time reduced the mean annual Q. In addition, it decreased the mean annual and seasonal Qs, while the mean Ql and the Qb increased. It is widely known that forestation is associated with a decrease in annual Q, primarily as a result of increasing transpiration and interception rates since trees are generally known to have higher ET rates than other land uses (Bosch and Hewlett 1982; Bruijnzeel 1989, 2004; Brown et al. 2005; Marhaento et al. 2018; Bentley and Coomes 2020). In addition, a larger vegetated area as a result of successful forestation activity generally leads to an increase in the water storage capacity of the soil due to greater root penetration resulting in a larger infiltration rate and ground water recharge (Bruijnzeel 1989, 2004; Guevara-Escobar et al. 2007). Thus, the fraction of the Q originating from Qs has significantly decreased. The directions of changes in the water balance by forestation activity in this study are in line with other hydrological studies in tropical regions from Bruijnzeel (2004), Valentin et al. (2008), Remondi et al. (2016) and Marhaento et al. (2017a). However, it should be noted that in this study the changes of the mean annual and seasonal water balance components under the forestation only scenario was relatively minor. It was observed that the changes of water balance components were more pronounced under the climate change only scenario, indicating changes in the mean annual and seasonal R and T may have large impacts on the water availability of the Bogowonto catchment. We found that an increase in the mean annual R may result in a large increase in the Q and ET. In addition, the Q components Qb and Ql significantly increased, while changes in the Qs were relatively minor. Apparently, small increases in mean annual R (statistically not significant) are likely to have large impacts on the water balance of the study catchment. Our results are in line with the findings of Legesse et al. (2003), Khoi and Suetsugi (2014), and Shi et al. (2013), who argue that climate change (i.e., in T and R) has a larger influence on water availability than land use changes. However, it should be noted that climate change impacts on water availability vary depending on the spatial scale, due to direct and indirect influences through feedback mechanisms (Pielke 2005; Milly et al. 2005; Blöschl et al. 2007). In addition, it was observed that changes in annual and seasonal ET can likely be attributed to changes in annual and seasonal R, where the variations in ET follow the variations in R. Thus, an increase in ET is found during the wet seasons (DJF and MAM), while a decrease in ET is found during the dry seasons (JJA and SON). We agree with Budyko (1974), who argues that changes in ET are determined by the balance between R and evaporative demands. 
Under the combined forestation and climate change scenario, we found a relationship between changes in water balance components in response to forest establishment and climate change. However, it is observed that the magnitude of changes in the annual and seasonal water balance is mainly determined by climate change, whereby the directions of change in annual and seasonal water balance under the combined scenario are similar to those under the climate change only scenario. Apparently, with a 10.1% increase of forest area in the study catchment, it results in small changes in the annual Q and its flow components. The presence of a larger forest area which can result in more subsurface flow (i.e., Ql and Qb) due to an enhanced groundwater recharge are offset by climate change. Despite minor effects on the overall water balance, it was found that forestation activity has decreased the Q caused by much less Qs during the wet season. Thus, it may reduce the risk of moderate floods (Bruijnzeel 1989, 2004; Ibanez et al. 2002; Ogden et al. 2013; Remondi et al. 2016). However, on the other hand, the larger forested area due to forestation has negative impacts in the dry season. It was found that in the SON period, the ET rate slightly reduced while Q significantly decreased, mainly caused by a decrease in the Qs and Qb. In the dry season, the ET capacity is mainly determined by the antecedent soil moisture and influenced by different land cover types (Liu et al. 2011). With a significant decrease of R during the dry months and at the same time a rise in T, soil moisture has been soaked up to fulfil evaporative demand (Calder 1998; Calder et al. 2001; Bentley and Coomes 2020). Our simulation results show the potential importance of accounting for positive and negative effects in future forestation programs. Although forest establishment widely showed increased rates of ground water recharge as a result of increased infiltration (Ilstedt et al. 2016; Remondi et al. 2016; Marhaento et al. 2017a, b) and potentially result in a more balanced distribution of Q between the dry and wet season (Marhaento et al. 2019), the actual behaviour depends on many interacting factors. Marhaento et al. (2019) found that besides the percentage of forest area that obviously affect the water balance, the spatial land use configuration (e.g., shape and connectivity of land use types) may have an influence on hydrological processes. Clustered forests tend to have a more positive and pronounced impact on the water balance than scattered forest (Lin et al. 2007; Zhang et al. 2013; Li and Zhou 2015). For our study catchment, the forestation is spread in the upper and middle catchment area as it mainly occurs as private land (i.e., community forest) resulting in scattered forest area over the catchment. As a result, impacts of forestation on the water balance might be dampened. Bentley and Coomes (2020) argue that the historical land use prior to forestation plays an important role as well. Forestation in catchments that were previously fallow showing a more pronounced Q reduction than those that were reported as having been used for agriculture. In addition, effects of forestation to increase infiltration and ground water recharge rates are pronounced for forest establishment on degraded land (Bruijnzeel 2004; Ilstedt et al. 2007; Beck et al. 2013). For our study area, forestation has been established on agriculture area (i.e., dryland farming), which probably resulted in minor impacts on the water balance. 
It should be noted that dryland farming occupied a large area (77.2% in 2019), so that a small change (< 10%) of this land use type into forest area did not significantly change the overall catchment water balance. Moreover, agricultural management applied in dry land farming usually aims to preserve soil moisture which may offset the positive impacts of forest on soils (Bruijnzeel 2004). Another factor that potentially decelerate or accelerate the impacts of forestation on the water balance is tree characteristics. Sprenger et al. (2013) found that planting mixtures of pioneers and shade tolerant tree species may lead to moderate seepage rates compared to monocultures of either fast or slow growing tree species due to tree heights and canopy openness that are leveled out. In addition, Ellison et al. (2017) revealed that tree species and their root architecture are highly important for hydraulic redistribution of water in soils. Coniferous trees like Pinus merkusii that are dominant in the upper part of the study catchment may significantly increase ET rates. Thus, small deciduous trees species are in favor for future forestation to improve catchment yield since they can reduce interception losses (Hirsch et al. 2011). In addition, tree age is also an important factor controlling the water balance as young forests typically consume more water than old-growth forests (Delzon and Loustau 2005). Although promising results were obtained, several parts in this modeling study can be a source of uncertainty which may affect the results. The data and models used can be a source of uncertainty. For the data, it is a challenge to obtain long-term and reliable hydro-meteorological data for the study catchment. This is typically for South-East Asian countries including Indonesia where meteorological gauge networks generally include a limited number of stations which, commonly, are not well distributed over catchments (Douglas 1999), which was the case for our study as well. A corrected satellite-based data source used in this study has successfully overcome this limitation, but does not omit the biases (Ebert et al. 2007; Vila et al. 2009). As input for the SWAT model, this study used a soil map from FAO/IIASA/ISRIC/ISSCAS/JRC (2012) with a coarse spatial resolution. This global soil map was used because of limited information on soil characteristics in the local soil map available for the study catchment. Because of this soil generalization, several details important for hydrological processes for different soil conditions might be obscured. In addition, sources of uncertainty can be present due to model choice and structure (e.g., model assumptions, equations, parameterization). For instance, the SWAT model has been developed for temperate regions and the default SWAT database is possibly not applicable to the tropics (Marhaento et al. 2017b), which may affect the results. The selection of the equation to calculate ETo, where we used a temperature-based calculation (i.e., Hargreaves) due to limited climate data availability, may also contribute to uncertainty in the results. Finally, the equifinality problem during parameterization was the most challenging part in the model simulations. Although we got satisfactory simulated Q, we found a decrease in model performance between the calibration period and validation period. This decrease could be an indication of a significant contribution of climate change and/or land use change in Q alteration (Refsgaard et al. 1989; Lørup et al. 
1998, and Marhaento et al. 2017b), but could also be an indication of errors in parameters. It should also be noted that satisfactory simulated Q do not guarantee a good model performance for other variables such as ET that is important in this type of study. Therefore, in future research we suggest to include ET in the calibration process as well (Rientjes et al. 2013). This study assessed the impacts of forestation activity and climate change on the annual and seasonal water balance of the Bogowonto catchment. Land use of the study catchment changed during the period 2006–2019, where the forest cover increased by 10.1% of the total area indicating a successful forestation program. In the same period, it was observed that there was an increase in the mean annual T, R and Q. Seasonally, the R pattern also changed with an increase in the wet season (DJF) and dry season (JJA), while in the transition periods from the wet to dry season and vice versa (i.e., MAM and SON) there was a decrease in R. Results based on the SWAT modelling approach showed that changes of the mean annual and seasonal water balance components under the forestation only scenario were relatively minor. Changes were more pronounced under the climate change only scenario, indicating changes in the mean annual and seasonal R and T may have large impacts on the water availability of the Bogowonto catchment. Based on the combined scenario, it was observed that the effects of the presence of a larger forest area on the water balance were relatively minor compared to climate change. Despite minor impacts, forestation activity has decreased the Q and Qs during the wet season (DJF) which may reduce the risk of floods. However, it also has serious drawbacks, with significantly reduced Q and Qb resulting in more severe drought events during the dry season. The datasets used and/or analyzed in this study are available from the corresponding author on request. Abbaspour KC, Rouholahnejad E, Vaghefi S, Srinivasan R, Yang H, Kløve B (2015) A continental-scale hydrology and water quality model for Europe: calibration and uncertainty of a high-resolution large-scale SWAT model. J Hydrol 524:733–752. https://doi.org/10.1016/j.jhydrol.2015.03.027 Arnold JG, Srinivasan R, Muttiah RS, Williams JR (1998) Large area hydrologic modeling and assessment part I: model development. J Am Water Res Assoc 34(1):73–89. https://doi.org/10.1111/j.1752-1688.1998.tb05961.x Beck HE, Bruijnzeel LA, Van Dijk AIJM, McVicar TR, Scatena FN, Schellekens J (2013) The impact of forest regeneration on streamflow in 12 mesoscale humid tropical catchments. Hydrol Earth Syst Sci 17(7):2613–2635. https://doi.org/10.5194/hess-17-2613-2013 Beck HE, Vergopolan N, Pan M, Levizzani V, Van Dijk AI, Weedon GP, Wood EF (2017) Global-scale evaluation of 22 precipitation datasets using gauge observations and hydrological modeling. Hydrol Earth Syst Sci 21(12):6201–6217. https://doi.org/10.5194/hess-21-6201-2017 Bentley L, Coomes DA (2020) Partial river flow recovery with forest age is rare in the decades following establishment. Glob Chang Biol 26(3):1458–1473. https://doi.org/10.1111/gcb.14954 Blöschl G, Ardoin-Bardin S, Bonell M, Dorninger M, Goodrich D, Gutknecht D, Matamoros D, Merz B, Shand P, Szolgay J (2007) At what scales do climate variability and land cover change impact on flooding and low flows? Hydrol Proc 21(9):1241–1247. 
https://doi.org/10.1002/hyp.6669 Bokhari SAA, Ahmad B, Ali J, Ahmad S, Mushtaq H, Rasul G (2018) Future climate change projections of the Kabul River basin using a multi-model ensemble of high-resolution statistically downscaled data. Earth Syst Environ 2(3):477–497. https://doi.org/10.1007/s41748-018-0061-y Bosch JM, Hewlett JD (1982) A review of catchment experiments to determine the effect of vegetation changes on water yield and evapotranspiration. J Hydrol 55(1):3–23. https://doi.org/10.1016/0022-1694(82)90117-2 Brown AE, Zhang L, McMahon TA, Western AW, Vertessy RA (2005) A review of paired catchment studies for determining changes in water yield resulting from alterations in vegetation. J Hydrol 310(1–4):28–61. https://doi.org/10.1016/j.jhydrol.2004.12.010 Bruijnzeel LA (1989) (De)forestation and dry-season flow in the tropics: a closer look. J Trop Forest Sci 1:229–243 Bruijnzeel LA (2004) Hydrological functions of tropical forests: not seeing the soil for the trees? Agric Ecosyst Environ 104(1):185–228. https://doi.org/10.1016/j.agee.2004.01.015 Budyko MI (1974) Climate and life. Academic Press, New York Calder IR (1998) Water resources and land use issues. SWIM paper 3. International Water Management Institute, Colombo Calder IR, Young D, Sheffield J (2001) Scoping study to indicate the direction and magnitude of the hydrological impacts resulting from land use change on the Panama Canal watershed. Centre for Land Use and Water Resources Research, Newcastle upon Tyne DeFries R, Eshleman KN (2004) Land-use change and hydrologic processes: a major focus for the future. Hydrol Proc 18(11):2183–2186. https://doi.org/10.1002/hyp.5584 Delzon S, Loustau D (2005) Age-related decline in stand water use: sap flow and transpiration in a pine forest chronosequence. Agric Forest Meteorol 129(3–4):105–119. https://doi.org/10.1016/j.agrformet.2005.01.002 Douglas I (1999) Hydrological investigations of forest disturbance and land cover impacts in South-East Asia: a review. Philos Trans R Soc B Biol Sci 354(1391):1725–1738. https://doi.org/10.1098/rstb.1999.0516 Ebert EE, Janowiak JE, Kidd C (2007) Comparison of near-real-time precipitation estimates from satellite observations and numerical models. Bull Am Meteorol Soc 88(1):47–64. https://doi.org/10.1175/BAMS-88-1-47 Ellison D, Morris CE, Locatelli B, Sheil D, Cohen J, Murdiyarso D, Gutierrez V, van Noordwijk M, Creed IF, Pokorny J, Gaveau D, Spracklen DV, Tobella AB, Ilstedt U, Teuling AJ, Gebrehiwot SG, Sands DC, Muys B, Verbist B, Springgay E, Sugandi Y, Sullivan CA (2017) Trees, forests and water: cool insights for a hot world. Global Environm Change 43:51–61. https://doi.org/10.1016/j.gloenvcha.2017.01.002 FAO (2001) Lecture notes on the major soils of the world (no. 94). In: Driessen P, Deckers J, Spaargaren O, Nachtergaele F (eds) World soil resources report. Food and Agriculture Organization (FAO), Rome FAO/IIASA/ISRIC/ISSCAS/JRC (2012) Harmonized World Soil Database (version 1.2). FAO, Rome and IIASA, Laxenburg Funk C, Peterson P, Landsfeld M, Pedreros D, Verdin J, Shukla S, Husak G, Rowland J, Harrison L, Hoell A, Michaelsen J (2015) The climate hazards infrared precipitation with stations—a new environmental record for monitoring extremes. Sci Data 2(1):1–21. https://doi.org/10.1038/sdata.2015.66 Gallo EL, Meixner T, Aoubid H, Lohse KA, Brooks PD (2015) Combined impact of catchment size, land cover, and precipitation on streamflow and total dissolved nitrogen: a global comparative analysis. Global Biogeochem Cyc 29(7):1109–1121. 
https://doi.org/10.1002/2015GB005154 Guevara-Escobar A, Gonzalez-Sosa E, Ramos-Salinas M, Hernandez-Delgado GD (2007) Experimental analysis of drainage and water storage of litter layers. Hydrol Earth Syst Sci 11(5):1703–1716. https://doi.org/10.5194/hess-11-1703-2007 Gupta HV, Kling H, Yilmaz KK, Martinez GF (2009) Decomposition of the mean squared error and NSE performance criteria: implications for improving hydrological modelling. J Hydrol 377(1–2):80–91. https://doi.org/10.1016/j.jhydrol.2009.08.003 Hejazi MI, Moglen GE (2008) The effect of climate and land use change on flow duration in the Maryland Piedmont region. Hydrol Proc 22(24):4710–4722. https://doi.org/10.1002/hyp.7080 Hirsch F, Clark D, Vihervaara P, Primmer E (2011) Payments for forest-related ecosystem services: what role for a green economy? UNECE/FAO Forestry and Timber Section, in cooperation with the Finnish Environment Institute (SYKE); UNECE Water Convention; FAO; United Nations University Institute for Water Environment and Health (UNU-INWEH), Switzerland Ibanez R, Condit R, Angehr G, Aguilar S, Garcia T, Martinez R, Sanjur A, Stallard R, Wright SJ, Rand AS, Heckadon S (2002) An ecosystem report on the Panama Canal: monitoring the status of the forest communities and the watershed. Environm Monit Assess 80(1):65–95. https://doi.org/10.1023/A:1020378926399 Ilstedt U, Malmer A, Verbeeten E, Murdiyarso D (2007) The effect of afforestation on water infiltration in the tropics: a systematic review and meta-analysis. Forest Ecol Manag 251(1):45–51. https://doi.org/10.1016/j.foreco.2007.06.014 Ilstedt U, Tobella AB, Bazié HR, Bayala J, Verbeeten E, Nyberg G, Sanou J, Benegas L, Murdiyarso D, Laudon H, Sheil D, Malmer A (2016) Intermediate tree cover can maximize groundwater recharge in the seasonally dry tropics. Sci Rep 6(1):1–12. https://doi.org/10.1038/srep21930 IPCC (2012) Summary for policymakers. In: Field CB, Barros V, Stocker TF, Qin D, Dokken DJ, Ebi KL, Mastrandrea MD, Mach KJ, Plattner G-K, Allen SK, Tignor M, Midgley PM (eds) Managing the risks of extreme events and disasters to advance climate change adaptation. A special report of working groups I and II of the intergovernmental panel on climate change. Cambridge University press, Cambridge and New York, pp 1–19 Katiraie-Boroujerdy PS, Rahnamay Naeini M, Akbari Asanjan A, Chavoshian A, Hsu KL, Sorooshian S (2020) Bias correction of satellite-based precipitation estimations using quantile mapping approach in different climate regions of Iran. Remote Sens 12(13):2102. https://doi.org/10.3390/rs12132102 Khoi DN, Suetsugi T (2014) The responses of hydrological processes and sediment yield to land-use and climate change in the Be River Catchment, Vietnam. Hydrol Proc 28(3):640–652. https://doi.org/10.1002/hyp.9620 Knoben WJ, Freer JE, Woods RA (2019) Inherent benchmark or not? Comparing Nash–Sutcliffe and Kling–Gupta efficiency scores. Hydrol Earth Syst Sci 23(10):4323–4331. https://doi.org/10.5194/hess-23-4323-2019 Le HD, Smith C, Herbohn J, Harrison S (2012) More than just trees: assessing reforestation success in tropical developing countries. J Rural Stud 28(1):5–19. https://doi.org/10.1016/j.jrurstud.2011.07.006 Legesse D, Vallet-Coulomb C, Gasse F (2003) Hydrological response of a catchment to climate and land use changes in tropical Africa: case study south Central Ethiopia. J Hydrol 275(1):67–85. https://doi.org/10.1016/S0022-1694(03)00019-2 Li J, Zhou ZX (2015) Coupled analysis on landscape pattern and hydrological processes in Yanhe watershed of China. 
Sci Total Environ 505:927–938. https://doi.org/10.1016/j.scitotenv.2014.10.068 Li Z, Liu WZ, Zhang XC, Zheng FL (2009) Impacts of land use change and climate variability on hydrology in an agricultural catchment on the Loess Plateau of China. J Hydrol 377(1–2):35–42. https://doi.org/10.1016/j.jhydrol.2009.08.007 Lin YP, Hong NM, Wu PJ, Wu CF, Verburg PH (2007) Impacts of land use change scenarios on hydrology and land use patterns in the Wu-Tu watershed in northern Taiwan. Landscape Urban Plan 80(1):111–126. https://doi.org/10.1016/j.landurbplan.2006.06.007 Liu Y, Zhang X, Xia D, You J, Rong Y, Bakir M (2011) Impacts of land-use and climate changes on hydrologic processes in the Qingyi River watershed, China. Journal of Hydrologic Engineering, 18(11), 1495–1512. Liu D (2020) A rational performance criterion for hydrological model. J Hydrol 590:125488. https://doi.org/10.1016/j.jhydrol.2020.125488 Lørup JK, Refsgaard JC, Mazvimavi D (1998) Assessing the effect of land use change on catchment runoff by combined use of statistical tests and hydrological modelling: case studies from Zimbabwe. J Hydrol 205(3):147–163. https://doi.org/10.1016/S0168-1176(97)00311-9 Lyu L, Wang X, Sun C, Ren T, Zheng D (2019) Quantifying the effect of land use change and climate variability on green water resources in the Xihe River basin, Northeast China. Sustainability 11(2):338. https://doi.org/10.3390/su11020338 Marhaento H, Booij MJ, Ahmed N (2021) Quantifying relative contribution of land use change and climate change to streamflow alteration in the Bengawan Solo River, Indonesia. Hydrol Sci J 66(6):1059–1068. https://doi.org/10.1080/02626667.2021.1921182 Marhaento H, Booij MJ, Hoekstra AY (2017a) Attribution of changes in streamflow to land use change and climate change in a mesoscale tropical catchment in Java, Indonesia. Hydrol Res 48(4):1143–1155. https://doi.org/10.2166/nh.2016.110 Marhaento H, Booij MJ, Hoekstra AY (2018) Hydrological response to future land-use change and climate change in a tropical catchment. Hydrol Sci J 63(9):1368–1385. https://doi.org/10.1080/02626667.2018.1511054 Marhaento H, Booij MJ, Rientjes THM, Hoekstra AY (2017b) Attribution of changes in the water balance of a tropical catchment to land use change using the SWAT model. Hydrol Proc 31(11):2029–2040. https://doi.org/10.1002/hyp.11167 Marhaento H, Booij MJ, Rientjes THM, Hoekstra AY (2019) Sensitivity of streamflow characteristics to different spatial land-use configurations in tropical catchment. J Water Res Plan Manag 145(12):04019054. https://doi.org/10.1061/(ASCE)WR.1943-5452.0001122 Milly PCD, Dunne KA, Vecchia AV (2005) Global pattern of trends in streamflow and water availability in a changing climate. Nature 438(7066):347–350. https://doi.org/10.1038/nature04312 Nauman S, Zulkafli Z, Bin Ghazali AH, Yusuf B (2019) Impact assessment of future climate change on streamflows upstream of Khanpur dam, Pakistan using soil and water assessment tool. Water 11(5):1090. https://doi.org/10.3390/w11051090 Neitsch SL, Arnold JG, Kiniry JR, Williams JR (2011) Soil and water assessment tool theoretical documentation version 2009. Texas Water Resources Institute, United States of America Nobre CA, Sampaio G, Borma LS, Castilla-Rubio JC, Silva JS, Cardoso M (2016) Land-use and climate change risks in the Amazon and the need of a novel sustainable development paradigm. Proc Nat Acad Sci 113(39):10759–10768. 
https://doi.org/10.1073/pnas.1605516113 Ogden FL, Crouch TD, Stallard RF, Hall JS (2013) Effect of land cover and use on dry season river runoff, runoff efficiency, and peak storm runoff in the seasonal tropics of Central Panama. Water Res Res 49(12):8443–8462. https://doi.org/10.1002/2013WR013956 Paredes-Trejo FJ, Barbosa HA, Kumar TL (2017) Validating CHIRPS-based satellite precipitation estimates in Northeast Brazil. J Arid Environ 139:26–40 Pielke RA (2005) Land use and climate change. Science 310(5754):1625–1626. https://doi.org/10.1126/science.1120529 Pool S, Vis M, Seibert J (2018) Evaluating model performance: towards a non-parametric variant of the Kling-Gupta efficiency. Hydrol Sci J 63(13–14):1941–1953. https://doi.org/10.1080/02626667.2018.1552002 Refsgaard JC, Alley WM, Vuglinsky VS (1989) Methods for distinguishing between Man's influence and climatic effects on the hydrological cycle. IHP-III project 6.3. Unesco, Paris Remondi F, Burlando P, Vollmer D (2016) Exploring the hydrological impact of increasing urbanisation on a tropical river catchment of the metropolitan Jakarta, Indonesia. Sust Citie Soc 20:210–221. https://doi.org/10.1016/j.scs.2015.10.001 Rientjes THM, Haile AT, Kebede E, Mannaerts CMM, Habib E, Steenhuis TS (2011) Changes in land cover, rainfall and stream flow in upper gilgel abbay catchment, Blue Nile basin-Ethiopia. Hydrol Earth Syst Sci 15(6):1979–1989. https://doi.org/10.5194/hess-15-1979-2011 Rientjes THM, Muthuwatta LP, Bos MG, Booij MJ, Bhatti HA (2013) Multi-variable calibration of a semi-distributed hydrological model using streamflow data and satellite-based evapotranspiration. J Hydrol 505:276–290. https://doi.org/10.1016/j.jhydrol.2013.10.006 Romanowicz RJ, Booij MJ (2011) Impact of land use and water management on hydrological processes under varying climatic conditions. Physics Chem Earth Preface 36(13):613–614. https://doi.org/10.1016/j.pce.2011.08.009 Sala OE, Chapin FS, Armesto JJ, Berlow E, Bloomfield J, Dirzo R, Huber-Sanwald E, Huenneke LF, Jackson RB, Kinzig A, Leemans R, Lodge DM, Mooney HA, Oesterheld M, Poff NL, Sykes MT, Walker BH, Walker M, Wall DH (2000) Global biodiversity scenarios for the year 2100. Sci Mag 287(5459):1770–1774. https://doi.org/10.1126/science.287.5459.1770 Saxton K, Willey P (2005) The SPAW Model for Agricultural Field and Pond Hydrologic Simulation. In V. Singh & D. Frevert, Mathematical Modeling of Watershed Hydrology (Chapter 17, pp. 1-37). CRC Press LLC. Retrieved on 19 January 2021 from https://hrsl.ba.ars.usda.gov/SPAW/SPAW%20Book%20Chapter.pdf Sen PK (1968) Estimates of the regression coefficient based on Kendall's tau. J Am Stat Assoc 63(324):1379–1389. https://doi.org/10.1080/01621459.1968.10480934 Shi P, Ma X, Hou Y, Li Q, Zhang Z, Qu S, Chen C, Cai T, Fang X (2013) Effects of land-use and climate change on hydrological processes in the upstream of Huai river, China. Water Res Manag 27(5):1263–1278. https://doi.org/10.1007/s11269-012-0237-4 Song Y, Zhang J, Meng X, Zhou Y, Lai Y, Cao Y (2020) Comparison study of multiple precipitation forcing data on hydrological modeling and projection in the Qujiang river basin. Water 12(9):2626. https://doi.org/10.3390/w12092626 Sprenger M, Oelmann Y, Weihermüller L, Wolf S, Wilcke W, Potvin C (2013) Tree species and diversity effects on soil water seepage in a tropical plantation. Forest Ecol Manag 309:76–86. 
https://doi.org/10.1016/j.foreco.2013.03.022 Suryatmojo H, Masamitsu F, Kosugi K, Mizuyama T (2011) Impact of selective logging and intensive line planting system on runoff and soil erosion in a tropical Indonesia rainforest. Proceed River Basin Manag VI:288–300 Tarigan S, Wiegand K, Slamet B (2018) Minimum forest cover required for sustainable water flow regulation of a watershed: a case study in Jambi Province, Indonesia. Hydrol Earth Syst Sci 22(1):581–594. https://doi.org/10.5194/hess-22-581-2018 Thrasher B, Maurer EP, McKellar C, Duffy PB (2012) Bias correcting climate model simulated daily temperature extremes with quantile mapping. Hydrol Earth Syst Sci 16(9):3309–3314. https://doi.org/10.5194/hess-16-3309-2012 Tuo Y, Duan Z, Disse M, Chiogna G (2016) Evaluation of precipitation input for SWAT modeling in Alpine catchment: a case study in the Adige river basin (Italy). Sci Total Environ 573:66–82. https://doi.org/10.1016/j.scitotenv.2016.08.034 Valentin C, Agus F, Alamban R, Boosaner A, Bricquet JP, Chaplot VT, de Guzman A, de Rouw JL, Janeau D, Orange K, Phachomphonh DD, Phai P, Podwojewski O, Ribolzi N, Silvera K, Subagyono JP, Thiébaux T, Duc Toan TV (2008) Runoff and sediment losses from 27 upland catchments in Southeast Asia: impact of rapid land use changes and conservation practices. Agric Ecosyst Environ 128(4):225–238. https://doi.org/10.1016/j.agee.2008.06.004 Vila DA, De Goncalves LGG, Toll DL, Rozante JR (2009) Statistical evaluation of combined daily gauge observations and rainfall satellite estimates over continental South America. J Hydrometeorol 10(2):533–543. https://doi.org/10.1175/2008JHM1048.1 Williams JR (1995) Chapter 25: the EPIC model. In: Singh VP (ed) Computer models of watershed hydrology. Water Resources Publications, Highland Ranch, pp 909–1000 Wohl E, Barros A, Brunsell N, Chappell NA, Coe M, Giambelluca T, Goldsmith S, Harmon R, Hendrickx JMH, Juvik J, McDonnell J, Ogden F (2012) The hydrology of the humid tropics. Nat Clim Chang 2(9):655–662. https://doi.org/10.1038/nclimate1556 Zhang G, Guhathakurta S, Dai G, Wu L, Yan L (2013) The control of land-use patterns for stormwater management at multiple spatial scales. Environm Manag 51(3):555–570. https://doi.org/10.1007/s00267-012-0004-6 Zhang L, Nan Z, Xu Y, Li S (2016) Hydrological impacts of land use change and climate variability in the headwater region of the Heihe River basin, Northwest China. PLoS One 11(6):e0158394. https://doi.org/10.1371/journal.pone.0158394 Zhang Y, Guan D, Jin C, Wang A, Wu J, Yuan F (2014) Impacts of climate change and land use change on runoff of forest catchment in Northeast China. Hydrol Proc 28(2):186–196. https://doi.org/10.1002/hyp.9564 The authors would like to thank Serayu Opak River Basin Organization and the Directorate General of Forestry and Environmental Planning, Ministry of Forestry, for providing the hydro- climatological and land use data. The first author thanks Hayun Nasta who helped during data analysis. The research has partly been funded by the publication grant scheme from the Publishers and Publications Board (BPP), Universitas Gadjah Mada, Indonesia. Faculty of Forestry, Universitas Gadjah Mada, Yogyakarta, 55281, Indonesia Hero Marhaento Water Engineering and Management Group, Faculty of Engineering Technology, University of Twente, P.O. Box 217, 7500 AE, Enschede, the Netherlands Martijn J. 
Booij Faculty of Geography, Universitas Gadjah Mada, Yogyakarta, 55281, Indonesia Noorhadi Rahardjo Key Laboratory of Mountain Surface Process and Ecological Regulations, Institute of Mountain Hazards and Environment, Chinese Academy of Sciences, Chengdu, 610041, China Naveed Ahmed HM designed research, collected and analyzed data, and wrote the initial manuscript; MJB and NA analyzed hydrological data; NR analyzed land use change; all authors discussed the results and revised the manuscript. The authors read and approved the final manuscript. Correspondence to Hero Marhaento. Marhaento, H., Booij, M.J., Rahardjo, N. et al. Impacts of forestation on the annual and seasonal water balance of a tropical catchment under climate change. For. Ecosyst. 8, 64 (2021). https://doi.org/10.1186/s40663-021-00345-5 Keywords: Land use change; SWAT model; Bogowonto catchment
Limitations of the incidence density ratio as approximation of the hazard ratio Ralf Bender ORCID: orcid.org/0000-0002-2422-43621,2 & Lars Beckmann1 Incidence density ratios (IDRs) are frequently used to account for varying follow-up times when comparing the risks of adverse events in two treatment groups. The validity of the IDR as approximation of the hazard ratio (HR) is unknown in the situation of differential average follow up by treatment group and non-constant hazard functions. Thus, the use of the IDR when individual patient data are not available might be questionable. A simulation study was performed using various survival-time distributions with increasing and decreasing hazard functions and various situations of differential follow up by treatment group. HRs and IDRs were estimated from the simulated survival times and compared with the true HR. A rule of thumb was derived to decide in which data situations the IDR can be used as approximation of the HR. The results show that the validity of the IDR depends on the survival-time distribution, the difference between the average follow-up durations, the baseline risk, and the sample size. For non-constant hazard functions, the IDR is only an adequate approximation of the HR if the average follow-up durations of the groups are equal and the baseline risk is not larger than 25%. In the case of large differences in the average follow-up durations between the groups and non-constant hazard functions, the IDR represents no valid approximation of the HR. The proposed rule of thumb allows the use of the IDR as approximation of the HR in specific data situations, when it is not possible to estimate the HR by means of adequate survival-time methods because the required individual patient data are not available. However, in general, adequate survival-time methods should be used to analyze adverse events rather than the simple IDR. Adverse events play an important role in the assessment of medical interventions. Simple standard methods for contingency tables are frequently applied for the analysis of adverse events. However, the application of simple, standard methods may be misleading if observations are censored at the time of discontinuation due to, for example, treatment switching or noncompliance, resulting in varying follow-up times, which sometimes differ remarkably between treatment groups [1]. Incidence densities (IDs), i.e., events per patient years, are frequently used to account for varying follow-up times when quantifying the risk of adverse events [2,3,4]. IDs are also called exposure-adjusted incidence rates (EAIRs) to underline that varying follow-up times are taken into account [2,3,4,5]. For comparisons between groups, incidence density ratios (IDRs) are used together with confidence intervals (CIs) based upon the assumption that the corresponding time-to-event variables follow an exponential distribution. The corresponding results are interpreted in the same way as hazard ratios (HRs). An example is given by the benefit assessment of the Institute for Quality and Efficiency in Health Care (IQWiG) in which the added benefit of abiraterone acetate (abiraterone for short) in comparison with watchful waiting was investigated in men with metastatic prostate cancer that is not susceptible to hormone-blocking therapy, who have no symptoms or only mild ones, and in whom chemotherapy is not yet indicated [6]. 
In this report the IDR was used to compare the risks of cardiac failure in the abiraterone group and the control group of the corresponding approval study. The result was IDR = 4.20, 95% CI 0.94, 18.76; P = 0.060. It is questionable whether the use of the IDR is adequate in this data situation because the median follow-up duration was 14.8 months in the abiraterone group but only 9.3 months in the control group. The reason for this large difference was the discontinuation of treatment after disease progression with stopping of the monitoring of adverse events 30 days later. In the situation of constant hazard functions, i.e., if the time-to-event data follow an exponential distribution, the IDR accounts for the differential follow up by treatment group. However, if the hazard functions are not constant, the effect of differential follow up by treatment group on the behavior of the IDR is unknown. Appropriate methods should be used for analysis of survival data if access to the individual patient data is available. However, access to the individual patient data is not available in the assessment of dossiers or publications with aggregate-level data. In this situation, a decision has to be made on the situations in which the IDR can or cannot be used as adequate approximation for the HR. The use of IDs makes sense in the situation of constant hazard functions in both groups [2, 3, 5, 7]. However, time-to-event data rarely follow an exponential distribution in medical research [3, 7]. In the case of low event risks, deviations from the exponential distribution may be negligible if the average follow up is comparable in both groups [2]. However, in the case of differential follow up by treatment group, deviations from the exponential distribution may have a considerable effect on the validity of the IDR and the corresponding CIs as an approximation of the HR. Kunz et al. [8] investigated bias and coverage probability (CP) of point and interval estimates of IDR in meta-analyses and in a single study with differential follow up by treatment group when incorrectly assuming that average follow up is equal in the two groups. It was shown that bias and CP worsen rapidly with increasing difference in the average follow-up durations between the groups [8]. Here, we do not consider the effect of incorrectly assuming equal average follow-up durations. IDR is calculated correctly by using the different follow-up durations in the groups. The focus here is the effect of deviations from the exponential distribution of the time-to-event data. In this paper, the validity of the IDR as approximation of the HR is investigated in the situation of differential average follow up by treatment group by means of a simulation study considering decreasing and increasing hazard functions. A rule of thumb is derived to decide in which data situations the IDR can be used as approximation of the HR. We illustrate the application of the rule by using a real data example. Data generation We considered the situation of a randomized controlled trial (RCT) with two parallel groups of equal sample size n in each group. We generated data for a time-to-event variable T (time to an absorbing event or time to first event) with a non-constant hazard function according to Bender et al. [9]. The Weibull distribution is used to generate data with decreasing and the Gompertz distribution is used to generate data with increasing hazard functions. 
The survival functions S0(t)weib and S0(t)gomp of the control group using the Weibull and the Gompertz distribution, respectively, are defined by: $$ S_0(t)_{weib}=\exp\left(-\lambda t^{\nu}\right) $$ $$ S_0(t)_{gomp}=\exp\left(\frac{\lambda}{\alpha}\left(1-\exp\left(\alpha t\right)\right)\right), $$ where λ > 0 is the scale parameter and ν > 0, α ∈ (−∞,∞) are the shape parameters of the survival time distributions. The corresponding hazard functions of the control group are given by: $$ h_0(t)_{weib}=\lambda\,\nu\,t^{\nu-1} $$ $$ h_0(t)_{gomp}=\lambda\,\exp\left(\alpha t\right), $$ leading to a decreasing hazard function for ν < 1 (Weibull), and an increasing hazard function for α > 0 (Gompertz). We simulated data situations with identical and with different average follow-up durations in the control and intervention group. The average follow-up duration in the control group relative to the intervention group varied from 100% to 30% (in steps of 10%, i.e., 8 scenarios). To simulate a variety of study situations, we chose 9 different baseline risks (BLRs) (BLR = 0.01, 0.02, 0.05, 0.075, 0.1, 0.15, 0.2, 0.25, and 0.3), 7 different effect sizes (HR = 0.4, 0.7, 0.9, 1, 1.11, 1.43, and 2.5), and 3 different sample sizes (N = 200, 500, and 1000, with 1:1 randomization). The BLR is the absolute risk of an event in the control group over the actual follow-up period in the control group. The parameters of the survival-time distributions were chosen so that the specified baseline risks and effect sizes are valid for the corresponding follow-up duration in the control group and the HR for the comparison treatment versus control, respectively. We considered 1 situation with decreasing hazard function (Weibull distribution with shape parameter ν = 0.75) and 3 different situations with increasing hazard function (Gompertz distribution with shape parameter α = 0.5, 0.75, 1) because the case of increasing hazard was expected to be the more problematic one. The corresponding scale parameters λ for both the Weibull and the Gompertz distribution varied depending on the baseline risk and the follow-up duration in the control group. First results showed that in some situations with relative average follow-up durations in the control group of 80%, 90%, and 100%, the IDR has adequate properties for all baseline risks considered. Therefore, additional simulations were performed in these cases with larger baseline risks (0.5, 0.7, 0.9, 0.95, and 0.99). In total, the combination of 4 survival distributions with 8 or 3 relative follow-up durations, 9 or 5 baseline risks, 7 effect sizes, and 3 sample sizes resulted in (4 × 8 × 9 × 7 × 3) + (4 × 3 × 5 × 7 × 3) = 7308 different data situations. We included only simulation runs in which at least 1 event occurred in both groups and the estimation algorithm of the Cox proportional hazard model converged. If at least one of these conditions was violated, a new simulation run was started, so that for each of the 7308 data situations 1000 simulation runs were available. This procedure leads to a bias in situations in which simulation runs frequently had to be repeated (very low baseline risk, low sample size). However, this problem concerns both IDR and HR and it was not the goal of the study to evaluate the absolute bias of the estimators.
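For readers who want to reproduce this kind of data generation, the following R sketch draws survival times from the Weibull and Gompertz models defined above under a proportional hazards effect, using the inversion method of Bender et al. [9]. It is a minimal illustration, not the authors' simulation code: the function name, arguments, and parameter values in the example call are our own, and administrative censoring at the group-specific follow-up durations would still have to be applied afterwards.

```r
# Illustrative sketch (not the original simulation code): draw survival times
# for a two-arm trial under proportional hazards with the baseline hazards
# defined above. x = 0 (control) or 1 (intervention); beta = log(HR).
sim_surv <- function(n_per_group, hr, lambda, shape,
                     dist = c("weibull", "gompertz")) {
  dist <- match.arg(dist)
  x    <- rep(c(0, 1), each = n_per_group)   # 1:1 randomization
  u    <- runif(2 * n_per_group)             # U ~ Uniform(0, 1)
  rate <- lambda * exp(log(hr) * x)          # lambda * exp(beta * x)
  time <- switch(dist,
    # S(t | x) = exp(-lambda * exp(beta * x) * t^nu), with shape = nu
    weibull  = (-log(u) / rate)^(1 / shape),
    # S(t | x) = exp((lambda * exp(beta * x) / alpha) * (1 - exp(alpha * t))), with shape = alpha
    gompertz = (1 / shape) * log(1 - shape * log(u) / rate)
  )
  data.frame(group = x, time = time)
}

set.seed(42)
head(sim_surv(n_per_group = 500, hr = 0.4, lambda = 0.05, shape = 1,
              dist = "gompertz"))
```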
The IDR was calculated from the simulated time-to-event data by: $$ \mathrm{IDR}=\frac{e_1/\sum_{j=1}^{n}t_{1j}}{e_0/\sum_{j=1}^{n}t_{0j}}=\frac{e_1\sum_{j=1}^{n}t_{0j}}{e_0\sum_{j=1}^{n}t_{1j}}, $$ where ei represents the number of events in the control (i = 0) and the intervention group (i = 1), respectively, and tij represents the time to event or to study ending in patient j (j = 1, …,n) in group i (i = 0,1). A 95% CI for the IDR based on the assumption of a constant hazard function was obtained according to Deeks et al. [10] by: $$ \mathrm{IDR}\times\exp\left(\pm z_{0.975}\times \mathrm{SE}\left(\log\left(\mathrm{IDR}\right)\right)\right), $$ where z0.975 = Φ−1(0.975) and Φ denotes the cumulative distribution function of the standard normal distribution. The standard error (SE) of log (IDR) is given by: $$ \mathrm{SE}\left(\log\left(\mathrm{IDR}\right)\right)=\sqrt{\frac{1}{e_1}+\frac{1}{e_0}}. $$ The Cox proportional hazards model was used for point and interval estimation of the HR. All analyses were performed using the R statistical package [11]. To assess the adequacy of the IDR as approximation of the HR in the situation of non-constant hazard functions, we calculated the coverage probability (CP) of the 95% CIs as well as the mean square error (MSE) and the SE of the point estimates log (IDR) and log (HR). For effect sizes not equal to 1 (i.e., true HR ≠ 1), the relative bias was additionally calculated. The relative bias is given by the mean percent error (MPE) defined by: $$ \mathrm{MPE}=100\,\frac{1}{s}\sum_{j=1}^{s}\frac{\theta_j-\theta_{true}}{\theta_{true}}, $$ where s is the number of simulation runs (s = 1000), θj is the estimate of the considered parameter in simulation j, and θtrue is the true value of the considered parameter. The true HR was used as the true value for the HR estimation and for the IDR estimation because the goal of the study was to evaluate the adequacy of the IDR as approximation of the HR. Moreover, in the case of non-constant hazard functions the IDR can still be calculated by means of formula (5). However, there is no clear theoretical parameter available that is estimated by the empirical IDR. The primary performance measure is given by the CP, which should be close to the nominal level of 95%. To identify data situations in which the IDR can be used as an adequate approximation of the HR, we used the criterion that the CP of the 95% CI should be at least 90%. A rule of thumb was developed, depending on the relative average follow-up duration in the control group and the baseline risk, to decide whether or not the IDR can be used as a meaningful approximation of the HR. Simulation study In the situations considered in the simulation study it is not problematic to use the IDR as approximation of the HR if the average follow-up durations in both groups are equal and the BLR is not larger than 25%. The minimum CP of the interval estimation of the IDR is 92.5% (CP for HR 93.4%) for the Weibull and 91.2% (CP for HR 93.1%) for the Gompertz distribution. There were no relevant differences between the IDR and HR estimations in bias or MSE (results not shown).
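Because the IDR and its CI only require aggregate event counts and person-times, they are easy to compute directly. The following R sketch implements the formulas above; it is our own illustration, and the event counts and person-times in the example call are invented numbers.

```r
# Minimal sketch of the IDR point and interval estimate defined above.
# e1, e0: event counts; pt1, pt0: summed person-time in the intervention and
# control group, respectively. The numbers in the example call are made up.
idr_ci <- function(e1, pt1, e0, pt0, conf.level = 0.95) {
  idr    <- (e1 / pt1) / (e0 / pt0)             # IDR point estimate
  se_log <- sqrt(1 / e1 + 1 / e0)               # SE(log(IDR))
  z      <- qnorm(1 - (1 - conf.level) / 2)     # z_0.975 for a 95% CI
  c(IDR = idr,
    lower = idr * exp(-z * se_log),
    upper = idr * exp(z * se_log))
}

idr_ci(e1 = 30, pt1 = 420, e0 = 18, pt0 = 260)
```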
These results mean that even in the case of non-constant hazard functions but a constant HR, the IDR (independent of the effect size and the sample size) can be used as an approximation of the HR if the average follow-up durations in both groups are equal and the BLR is not larger than 25%. The situation is different in the case of unequal average follow-up durations in the two groups, which is the more important case in practice. In this situation, there are shortfalls in the CP and, in some cases, large relative bias values for the IDR. The CP falls markedly below the nominal level of 95% with increasing difference in the average follow-up durations between the groups. The CP improves with decreasing sample size, due to the decreasing precision. Therefore, the sample size of N = 1000 is the relevant situation for the derivation of general rules. As an example, Figure 1 shows the CP results for IDR and HR depending on the BLR and the relative average follow-up duration in the control group, for the Gompertz distribution with shape parameter α = 1, sample size N = 1000, and a true HR of 0.4. We see that the CP for the IDR falls markedly below the nominal level of 95% with increasing difference in the average follow-up durations between the groups and with increasing BLR, whereas the CP for the HR lies within the desired range in all situations. Coverage probability (CP) by baseline risk for the Gompertz distribution with shape parameter α = 1, sample size N = 1000, relative average follow-up duration in the control group from 30% to 100%, and a true hazard ratio (HR) of 0.4. The shaded area is the range of the CP for the HR over all these 72 scenarios; solid lines represent the CP for the incidence density ratio (IDR) for the different relative average follow-up duration in the control group; the horizontal dashed line marks the desired CP of 0.95 The results for the Gompertz distribution, with shape parameter α = 1, sample size N = 1000, and a relative average follow-up duration in the control group of 90%, are presented in Table 1 as an example. We can see in Table 1 that the CP of the 95% CIs of the IDR is larger than 90% if BLR is ≤ 10%, but is below 90% if BLR is ≥ 15%, which means that the IDR is an adequate approximation of the HR in the corresponding data situation if BLR is ≤ 10%. However, even in these cases a strong relative bias in the IDR occurs, with absolute MPE values in some cases above 100% (overestimation for the Weibull and underestimation for the Gompertz distribution). This can be accepted in practice for the following reason. The MPE is given on the log scale. A relative bias of MPE = 100% means that a true HR = 0.9 is estimated by IDR = 0.81. Such a bias seems to be acceptable if the corresponding CI has a CP of at least 90%. Table 1 Results for the Gompertz distribution Thresholds for BLR were derived for all other data situations. In total, 4 × 3 × 8 = 96 tables were produced for the 4 survival-time distributions, 3 sample sizes, and 8 relative average follow-up durations considered in the control group. The results are summarized in Table 2. Whether the IDR can be considered an adequate approximation of the HR depends not only on the BLR and the difference in the average follow-up durations between the groups but also, e.g., on the true survival-time distribution, which is unknown in practice.
However, to derive general rules for identifying situations in which the IDR can be used as an approximation of the HR, considering the BLR in relation to the relative average follow-up duration in the control group seems to be sufficiently accurate. From Table 2, the following pragmatic rules can be derived:
The IDR can be used in the case of equal follow-up durations in the two groups if BLR is ≤ 25%.
The IDR can be used in the case of a relative average follow-up duration in the control group between 90% and 100% if BLR is ≤ 10%.
The IDR can be used in the case of a relative average follow-up duration in the control group between 50% and 90% if BLR is ≤ 1%.
The IDR should not be used in the case of relative average follow-up durations < 50% in the control group.
Table 2 Maximum BLR for which CP of at least 90% is reached for interval estimation of IDR as approximation of the HR
Other improved rules can be derived in certain situations if there is knowledge about the true survival-time distribution. However, this requires new simulations with the specific survival-time distribution. Without knowledge about the true survival-time distribution, the rule of thumb presented above can be used for practical applications when there is no access to the individual patient data. For illustration we consider the IQWiG dossier assessment, in which the added benefit of enzalutamide in comparison with watchful waiting was investigated in men with metastatic prostate cancer that is not susceptible to hormone-blocking therapy, who have no or only mild symptoms, and in whom chemotherapy is not yet indicated [12]. According to the overall assessment, enzalutamide can prolong overall survival and delay the occurrence of disease complications. The extent of added benefit is dependent on age [12]. The benefit assessment was based upon an RCT, which was the approval study for enzalutamide in the indication described above. In this study, patients were randomized to either enzalutamide (intervention group) or placebo (control group), while the hormone-blocking therapy was continued in all patients. In each group, treatment was continued until disease progression or until safety concerns arose. Due to differential treatment discontinuation by treatment group, the median follow-up duration for safety endpoints was threefold longer in the intervention group (17.1 months) compared to the control group (5.4 months). Here, we consider the endpoint hot flashes, which played a minor role in the overall conclusion of the benefit assessment. However, for the present study this endpoint is relevant, because interesting results are available for three different analyses. In the corresponding dossier submitted by the company, effect estimates with 95% CIs and P values were presented in the form of risk ratios (RRs) based upon naive proportions, as IDRs, and as HRs. Additionally, Kaplan-Meier curves were presented. In each of the analyses only the first observed event of a patient was counted, i.e., there are no problems due to neglect of within-subject correlation. The following results were presented in the dossier for the endpoint "at least one hot flash". In the intervention group, 174 (20.0%) of n1 = 871 patients experienced one or more events, compared with 67 (7.9%) of n0 = 844 patients in the control group, which leads to an estimated RR = 2.52 with 95% CI 1.93, 3.28; P < 0.0001.
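Such aggregate results can be checked from the published counts alone. The short R sketch below reproduces the reported RR and its 95% CI using the standard log-normal (Katz) interval for a risk ratio; this is our own illustration and not code from the dossier.

```r
# Reproduce the naive risk ratio for "at least one hot flash" from the reported
# counts (174 of 871 patients with enzalutamide, 67 of 844 with placebo).
# Illustrative code; the log-normal (Katz) interval is a standard method.
e1 <- 174; n1 <- 871   # intervention group
e0 <- 67;  n0 <- 844   # control group

rr     <- (e1 / n1) / (e0 / n0)
se_log <- sqrt(1 / e1 - 1 / n1 + 1 / e0 - 1 / n0)
ci     <- rr * exp(c(-1, 1) * qnorm(0.975) * se_log)

round(c(RR = rr, lower = ci[1], upper = ci[2]), 2)   # approximately 2.52 (1.93, 3.28)
```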
However, as correctly argued by the company, this statistically significant effect could be induced simply by the threefold longer median follow-up duration in the intervention group. To account for the differential follow-up duration by treatment group, events per 100 patient years were presented (14.7 in the intervention group and 12.4 in the control group), leading to the not statistically significant result of IDR = 1.19 with 95% CI 0.87, 1.63; P = 0.28. However, according to our pragmatic rules, the IDR should not be used if the relative average follow-up duration in the control group is below 50%, which is the case here. Therefore, the validity of the IDR results is questionable in this example. Fortunately, the results of the Cox proportional hazards model were also presented. The result was statistically significant with an estimated HR = 2.29, 95% CI 1.73, 3.05; P < 0.0001. It should be noted that censoring is possibly not independent of outcome, leading to a high risk of bias. Nevertheless, the results of the Cox proportional hazards model are interpretable and were accepted in the dossier assessment with the conclusion of a considerable harm of enzalutamide for the endpoint hot flashes [12]. This example shows that the use of the IDR is invalid in the present case of differential follow-up duration by treatment group and non-constant hazard functions. From the Kaplan-Meier curves presented in the dossier it can be concluded that the hazard function of the endpoint hot flashes is decreasing. This situation can be illustrated as follows. In Fig. 2 we consider the situation of decreasing hazard with true HR = 2, i.e., the hazard in the intervention group is larger than in the control group. The relative average follow-up duration in the control group is only 33% compared to the intervention group. If the hazard is estimated simply by means of events per person year, it is implicitly assumed that the hazards are constant. In fact, however, the average hazard in each group is estimated by means of the ID for the available follow-up duration. As the follow-up duration in the control group is much shorter, the right part of the true hazard function is not observed, which leads to a strong bias of the ID as an estimate of the average hazard in the control group. Therefore, the IDR is also biased as an estimate of the HR. In this example with decreasing hazards and a large difference in the follow-up durations between the treatment groups, the harmful effect of enzalutamide on the endpoint hot flashes in comparison with watchful waiting could not be detected by means of the IDR. Therefore, the IDR is invalid here and should not be used to describe the effect of the intervention. Effect of a shorter follow-up duration in the control group on the incidence density ratio (IDR). ID1(t1) is the estimated average hazard in the intervention group up to t1 (black solid line), ID0(t0) is the estimated average hazard in the control group up to t0 (gray solid line); ID0(t1) is the estimated average hazard in the control group up to t1 (gray dashed line), which is not observed; the use of ID1(t1) and ID0(t0) leads to a biased estimate of the hazard ratio (HR) The IDR represents a valid estimator of the HR if the true hazard function is constant.
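The mechanism sketched in Fig. 2 can be reproduced with a few lines of simulation. The following toy example is our own illustration (it is not the simulation study described above, and all parameter values are arbitrary): a decreasing Weibull hazard with true HR = 2 and administrative censoring after a much shorter follow-up in the control group yields an IDR that is clearly closer to 1 than the HR estimated by the Cox model. It requires the survival package.

```r
# Toy illustration of Fig. 2 (our own code, arbitrary parameter values):
# decreasing Weibull hazard, true HR = 2, follow-up in the control group about
# one third of that in the intervention group.
library(survival)

set.seed(123)
n      <- 5000
group  <- rep(c(0, 1), each = n)          # 0 = control, 1 = intervention
u      <- runif(2 * n)
lambda <- 0.2; nu <- 0.5; hr <- 2         # nu < 1: decreasing hazard
t_event <- (-log(u) / (lambda * exp(log(hr) * group)))^(1 / nu)

fu     <- ifelse(group == 1, 12, 4)       # administrative censoring at 12 vs 4
time   <- pmin(t_event, fu)
status <- as.numeric(t_event <= fu)

idr <- (sum(status[group == 1]) / sum(time[group == 1])) /
       (sum(status[group == 0]) / sum(time[group == 0]))
hr_cox <- exp(coef(coxph(Surv(time, status) ~ group)))

c(IDR = idr, HR_Cox = unname(hr_cox))     # IDR biased towards 1, Cox HR close to 2
```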
However, for non-constant hazard functions we found that in the simulated data situations with decreasing and increasing hazard functions, the IDR is only an adequate approximation of the HR if the average follow-up durations in the groups are equal and the baseline risk is not larger than 25%. In the case of differential follow up by treatment group, the validity of the IDR depends on the true survival-time distribution, the difference between the average follow-up durations, the baseline risk, and the sample size. As a rule of thumb, the IDR can be used as approximation of the HR if the relative average follow-up duration in the control group is between 90% and 100% and BLR is ≤ 10%, and in the situation where the relative average follow-up duration in the control group is between 50% and 90% and BLR is ≤ 1%. The IDR should not be used for relative average follow-up durations in the control group below 50%, because in general the IDR represents no valid approximation of the HR and the meaning of the IDR is unclear. The usefulness of this rule of thumb was illustrated by means of a real data example. The results and the conclusions of our simulation study are limited in the first instance to the data situations considered. We considered a wide range of effect sizes (HR 0.4–2.5), three total sample sizes (N = 200, 500, 1000) with balanced design, and four survival-time distributions with decreasing (Weibull distribution) and increasing hazard functions (Gompertz distribution). For the baseline risk, we considered almost the complete range (0.01–0.99) in the simulations. We derived practical rules to decide in which data situations the IDR can be used as approximation of the HR. These rules should also be approximately valid for other data situations. If detailed knowledge of the underlying survival-time distribution is available, more simulations can be performed to find improved rules for the specific data situation. We have not investigated the amount of bias associated with different patterns of dependent censoring. In this context, the framework of estimands offers additional possibilities to deal with competing events, leading to censoring mechanisms that are not independent of the considered time-to-event endpoint [13]. We have also not considered data situations with recurrent events. Extensions of the Cox proportional hazards model, such as the Andersen-Gill, the Prentice-Williams-Peterson, the Wei-Lin-Weissfeld, and frailty models [14, 15], have been developed for analysis of recurrent event data. The application of methods for analysis of recurrent event data to analysis of adverse events in RCTs is discussed by Hengelbrock et al. [16]. Further research is required for the investigation of the impact of dependent censoring and multiple events on the validity of the IDR. In summary, in the case of large differences in the average follow-up durations between groups, the IDR represents no valid approximation of the HR if the true hazard functions are not constant. As constant hazard functions are rarely justified in practice, adequate survival-time methods accounting for different follow-up times should be used to analyze adverse events rather than the simple IDR, including methods for competing risks [17]. However, the proposed rule of thumb allows the application of IDR as approximation of the HR in specific data situations, when it is not possible to estimate the HR by means of adequate survival-time methods because the required individual patient data are not available.
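The decision logic of this rule of thumb can be summarised in a few lines of code. The following helper is our own sketch (the function name and interface are not from the paper); it takes the average follow-up duration in the control group relative to the intervention group and the baseline risk, and returns whether the IDR can be regarded as an acceptable approximation of the HR.

```r
# Sketch of the pragmatic rule of thumb derived above (our own illustration).
# rel_fu: average follow-up in the control group relative to the intervention
#         group (e.g. 0.9 for 90%); blr: baseline risk in the control group.
idr_acceptable <- function(rel_fu, blr) {
  if (rel_fu >= 1.0) return(blr <= 0.25)   # equal average follow-up durations
  if (rel_fu >= 0.9) return(blr <= 0.10)   # 90% to 100% relative follow-up
  if (rel_fu >= 0.5) return(blr <= 0.01)   # 50% to 90% relative follow-up
  FALSE                                    # below 50%: do not use the IDR
}

idr_acceptable(rel_fu = 5.4 / 17.1, blr = 67 / 844)   # enzalutamide example: FALSE
```

Applied to the enzalutamide example (median follow-up of 5.4 versus 17.1 months and a control-group event risk of about 7.9%), the helper returns FALSE, in line with the conclusion drawn above.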
All results from the simulated data are available from the authors on reasonable request. The data presented in the examples are available online [6, 12]. BLR: Baseline risk; CP: Coverage probability; EAIR: Exposure-adjusted incidence rate; HR: Hazard ratio; ID: Incidence density; IDR: Incidence density ratio; IQWiG: Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen; MPE: Mean percent error; MSE: Mean square error; RR: Risk ratio; SE: Standard error. Bender R, Beckmann L, Lange S. Biometrical issues in the analysis of adverse events within the benefit assessment of drugs. Pharm Stat. 2016;15(4):292–6. https://doi.org/10.1002/pst.1740. Liu GF, Wang J, Liu K, Snavely DB. Confidence intervals for an exposure adjusted incidence rate difference with applications to clinical trials. Stat Med. 2006;25(8):1275–86. https://doi.org/10.1002/sim.2335. Siddiqui O. Statistical methods to analyze adverse events data of randomized clinical trials. J Biopharm Stat. 2009;19(5):889–99. https://doi.org/10.1080/10543400903105463. Stein AS, Larson RA, Schuh AC, Stevenson W, Lech-Maranda E, Tran Q, Zimmerman Z, Kormany W, Topp MS. Exposure-adjusted adverse events comparing blinatumomab with chemotherapy in advanced acute lymphoblastic leukemia. Blood Adv. 2018;2(13):1522–31. https://doi.org/10.1182/bloodadvances.2018019034. Zink RC, Marchenko O, Sanchez-Kam M, Ma H, Jiang Q. Sources of safety data and statistical strategies for design and analysis: clinical trials. Ther Innov Regul Sci. 2018;52(2):141–58. https://doi.org/10.1177/2168479017738980. IQWiG. Abirateronacetat (neues Anwendungsgebiet) – Nutzenbewertung gemäß § 35a SGB V, Auftrag A13–06, Version 1.0 vom 11.04.2013. https://www.iqwig.de/download/A13-06_Abirateronacetat_neues_Anwendungsgebiet_Nutzenbewertung_35a_SGB_V.pdf. Accessed 27 June 2019. Kraemer HC. Events per person-time (incidence rate): a misleading statistic? Stat Med. 2009;28(6):1028–39. https://doi.org/10.1002/sim.3525. Kunz LM, Normand SL, Sedrakyan A. Meta-analysis of rate ratios with differential follow-up by treatment arm: inferring comparative effectiveness of medical devices. Stat Med. 2015;34(21):2913–25. https://doi.org/10.1002/sim.6530. Bender R, Augustin T, Blettner M. Generating survival times to simulate Cox proportional hazards models. Stat Med. 2005;24(11):1713–23. https://doi.org/10.1002/sim.2059. Deeks J, Higgins JPT, Altman D (editors) on behalf of the Cochrane Statistical Methods Group. Chapter 9: Analysing data and undertaking meta-analyses. In: Higgins JPT, Churchill R, Chandler J, Cumpston MS, editors. Cochrane handbook for systematic reviews of interventions, Version 5.2.0 (updated June 2017). http://www.training.cochrane.org/handbook. Accessed 02 February 2018. The R Foundation. The R project for statistical computing. https://www.R-project.org. Accessed 27 June 2019. IQWiG. Enzalutamid (neues Anwendungsgebiet) – Nutzenbewertung gemäß § 35a SGB V, Auftrag A14–48, Version 1.0 vom 30.03.2015. https://www.iqwig.de/download/A14-48_Enzalutamid-neues-Anwendungsgebiet_Nutzenbewertung-35a-SGB-V.pdf. Accessed 27 June 2019. Unkel S, Amiri M, Benda N, Beyersmann J, Knoerzer D, Kupas K, Langer F, Leverkus F, Loos A, Ose C, et al. On estimands and the analysis of adverse events in the presence of varying follow-up times within the benefit assessment of therapies. Pharm Stat. 2019;18(2):166–83. https://doi.org/10.1002/pst.1915. Amorim LDAF, Cai J. Modelling recurrent events: a tutorial for analysis in epidemiology. Int J Epidemiol. 2015;44(1):324–33. https://doi.org/10.1093/ije/dyu222. Wei LJ, Glidden DV.
An overview of statistical methods for multiple failure time data in clinical trials. Stat Med. 1997;16(8):833–9. https://doi.org/10.1002/(SICI)1097-0258(19970430)16:8<833::AID-SIM538>3.0.CO;2-2. Hengelbrock J, Gillhaus J, Kloss S, Leverkus F. Safety data from randomized controlled trials: applying models for recurrent events. Pharm Stat. 2016;15(4):315–23. https://doi.org/10.1002/pst.1757. Schmoor C, Bender R, Beyersmann J, Kieser M, Schumacher M. Adverse event development in clinical oncology trials. Lancet Oncol. 2016;17(7):e263–4. https://doi.org/10.1016/S1470-2045(16)30223-6. We thank Ulrich Grouven for editorial support. Department of Medical Biometry, Institute for Quality and Efficiency in Health Care (IQWiG), Im Mediapark 8, D–50670, Cologne, Germany Ralf Bender & Lars Beckmann Faculty of Medicine, University of Cologne, Cologne, Germany Ralf Bender Lars Beckmann RB conceived the concept of the study. LB carried out the simulations. Both authors drafted and reviewed the manuscript. Both authors have been involved in revisions and read and approved the final manuscript. Correspondence to Ralf Bender. Bender, R., Beckmann, L. Limitations of the incidence density ratio as approximation of the hazard ratio. Trials 20, 485 (2019). https://doi.org/10.1186/s13063-019-3590-2 Keywords: Hazard function; Randomized controlled trials; Time-to-event data
What is Cellulose? Cellulose is a biopolymer made from repeating monomer units of $\beta$-glucose. Shown to the right is the three-dimensional structure of an $\alpha$-glucose and a $\beta$-glucose monomer. In plants, cellulose provides strength to cell walls. Cellulose is present in most plant matter and is found in chairs, wooden furniture and other wooden constructions (cellulose makes up about 40-50% of wood). An important use of cellulose is the production of glucose monomer units through cellulolysis, which is a hydrolysis process. These glucose monomer units may later be fermented to produce ethanol, an industrially important chemical and fuel. Inter- and Intra-molecular Structure and Properties Cellulose is a very strong polymer; shown to the right is a structural diagram of cellulose. Cellulose is a condensation polymer and is made via a process called condensation polymerisation, that is, a series of condensation reactions. A condensation reaction involves the reaction between two or more monomers with the elimination of a small molecule, which in this case (but not always) is water. Due to its structure, cellulose is particularly hard and strong, and due to its large molecular size it is insoluble in water even though glucose monomers are. The formation of glycosidic bonds between successive monomers at alternating angles increases the stress that cellulose can withstand due to compression and contraction. In addition, the strong hydrogen bonding between the external hydroxyl groups, both within and between chains, makes cellulose particularly rigid and resistant to elastic forces. Hence, due to these factors, cellulose is an extremely strong biopolymer.
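To make the condensation and hydrolysis chemistry described above concrete, the overall reactions can be written as simplified net equations (our own illustrative addition; for a finite linear chain of n units, strictly (n−1) water molecules are released, so the first equation neglects the chain ends for large n):
$$ n\,\mathrm{C_6H_{12}O_6} \longrightarrow (\mathrm{C_6H_{10}O_5})_n + n\,\mathrm{H_2O} \quad \text{(condensation polymerisation of glucose to cellulose)} $$
$$ (\mathrm{C_6H_{10}O_5})_n + n\,\mathrm{H_2O} \longrightarrow n\,\mathrm{C_6H_{12}O_6} \quad \text{(cellulolysis: hydrolysis back to glucose)} $$
$$ \mathrm{C_6H_{12}O_6} \longrightarrow 2\,\mathrm{C_2H_5OH} + 2\,\mathrm{CO_2} \quad \text{(fermentation of glucose to ethanol)} $$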